00:00:00.000 Started by upstream project "autotest-per-patch" build number 132831 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.094 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.095 The recommended git tool is: git 00:00:00.095 using credential 00000000-0000-0000-0000-000000000002 00:00:00.097 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.155 Fetching changes from the remote Git repository 00:00:00.156 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.218 Using shallow fetch with depth 1 00:00:00.218 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.218 > git --version # timeout=10 00:00:00.267 > git --version # 'git version 2.39.2' 00:00:00.267 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.301 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.301 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.163 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.175 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.186 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.186 > git config core.sparsecheckout # timeout=10 00:00:06.197 > git read-tree -mu HEAD # timeout=10 00:00:06.212 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.235 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.235 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.324 [Pipeline] Start of Pipeline 00:00:06.338 [Pipeline] library 00:00:06.340 Loading library shm_lib@master 00:00:06.340 Library shm_lib@master is cached. Copying from home. 00:00:06.361 [Pipeline] node 00:00:06.382 Running on WFP3 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.384 [Pipeline] { 00:00:06.393 [Pipeline] catchError 00:00:06.395 [Pipeline] { 00:00:06.404 [Pipeline] wrap 00:00:06.410 [Pipeline] { 00:00:06.415 [Pipeline] stage 00:00:06.416 [Pipeline] { (Prologue) 00:00:06.675 [Pipeline] sh 00:00:06.959 + logger -p user.info -t JENKINS-CI 00:00:06.976 [Pipeline] echo 00:00:06.977 Node: WFP3 00:00:06.984 [Pipeline] sh 00:00:07.283 [Pipeline] setCustomBuildProperty 00:00:07.295 [Pipeline] echo 00:00:07.297 Cleanup processes 00:00:07.302 [Pipeline] sh 00:00:07.583 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.583 1353313 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.595 [Pipeline] sh 00:00:07.900 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.901 ++ grep -v 'sudo pgrep' 00:00:07.901 ++ awk '{print $1}' 00:00:07.901 + sudo kill -9 00:00:07.901 + true 00:00:07.913 [Pipeline] cleanWs 00:00:07.921 [WS-CLEANUP] Deleting project workspace... 00:00:07.921 [WS-CLEANUP] Deferred wipeout is used... 
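
The "Cleanup processes" step above is a guard against leftovers from a previous run: pgrep -af lists every process whose command line mentions the workspace (matching itself in the process, PID 1353313 here), grep -v drops that pgrep entry, awk keeps only the PID column, and the trailing '+ true' swallows the error kill -9 raises when the list turns out to be empty. A standalone sketch of the same pattern (the xargs -r variant is an assumption; the job script expands the PID list directly):

  # Kill leftover SPDK processes from a previous run; tolerate an empty list.
  ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo pgrep -af "$ws" \
    | grep -v 'sudo pgrep' \
    | awk '{print $1}' \
    | xargs -r sudo kill -9 || true   # -r: skip kill entirely when no PIDs matched
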
00:00:07.928 [WS-CLEANUP] done 00:00:07.931 [Pipeline] setCustomBuildProperty 00:00:07.942 [Pipeline] sh 00:00:08.222 + sudo git config --global --replace-all safe.directory '*' 00:00:08.318 [Pipeline] httpRequest 00:00:08.743 [Pipeline] echo 00:00:08.745 Sorcerer 10.211.164.112 is alive 00:00:08.754 [Pipeline] retry 00:00:08.756 [Pipeline] { 00:00:08.769 [Pipeline] httpRequest 00:00:08.773 HttpMethod: GET 00:00:08.774 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.774 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.799 Response Code: HTTP/1.1 200 OK 00:00:08.799 Success: Status code 200 is in the accepted range: 200,404 00:00:08.800 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.904 [Pipeline] } 00:00:11.920 [Pipeline] // retry 00:00:11.926 [Pipeline] sh 00:00:12.207 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.225 [Pipeline] httpRequest 00:00:12.974 [Pipeline] echo 00:00:12.975 Sorcerer 10.211.164.112 is alive 00:00:12.985 [Pipeline] retry 00:00:12.988 [Pipeline] { 00:00:13.003 [Pipeline] httpRequest 00:00:13.007 HttpMethod: GET 00:00:13.007 URL: http://10.211.164.112/packages/spdk_02d0d9b38e3e9670b2b9f2f9ea5033e06dd35d24.tar.gz 00:00:13.008 Sending request to url: http://10.211.164.112/packages/spdk_02d0d9b38e3e9670b2b9f2f9ea5033e06dd35d24.tar.gz 00:00:13.032 Response Code: HTTP/1.1 200 OK 00:00:13.033 Success: Status code 200 is in the accepted range: 200,404 00:00:13.033 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_02d0d9b38e3e9670b2b9f2f9ea5033e06dd35d24.tar.gz 00:02:06.169 [Pipeline] } 00:02:06.187 [Pipeline] // retry 00:02:06.194 [Pipeline] sh 00:02:06.480 + tar --no-same-owner -xf spdk_02d0d9b38e3e9670b2b9f2f9ea5033e06dd35d24.tar.gz 00:02:09.031 [Pipeline] sh 00:02:09.317 + git -C spdk log --oneline -n5 00:02:09.317 02d0d9b38 test/check_so_deps: use VERSION to look for prior tags 00:02:09.317 3ac4f97e3 build: use VERSION file for storing version 00:02:09.317 a5e6ecf28 lib/reduce: Data copy logic in thin read operations 00:02:09.317 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair 00:02:09.317 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting 00:02:09.328 [Pipeline] } 00:02:09.341 [Pipeline] // stage 00:02:09.351 [Pipeline] stage 00:02:09.353 [Pipeline] { (Prepare) 00:02:09.368 [Pipeline] writeFile 00:02:09.383 [Pipeline] sh 00:02:09.667 + logger -p user.info -t JENKINS-CI 00:02:09.680 [Pipeline] sh 00:02:09.966 + logger -p user.info -t JENKINS-CI 00:02:09.975 [Pipeline] sh 00:02:10.256 + cat autorun-spdk.conf 00:02:10.257 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.257 SPDK_TEST_NVMF=1 00:02:10.257 SPDK_TEST_NVME_CLI=1 00:02:10.257 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.257 SPDK_TEST_NVMF_NICS=e810 00:02:10.257 SPDK_TEST_VFIOUSER=1 00:02:10.257 SPDK_RUN_UBSAN=1 00:02:10.257 NET_TYPE=phy 00:02:10.264 RUN_NIGHTLY=0 00:02:10.269 [Pipeline] readFile 00:02:10.292 [Pipeline] withEnv 00:02:10.294 [Pipeline] { 00:02:10.305 [Pipeline] sh 00:02:10.591 + set -ex 00:02:10.591 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:10.591 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:10.591 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.591 ++ SPDK_TEST_NVMF=1 00:02:10.591 ++ SPDK_TEST_NVME_CLI=1 00:02:10.591 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 
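
autorun-spdk.conf, written out by 'cat' above, is deliberately plain shell: the runner only checks that the file exists and sources it, so each KEY=value line becomes a test knob for the rest of the job (the '++' entries around this point are that sourcing traced under 'set -x'). A minimal consumer sketch (the conf path is from the log; the guard and the fallback default are assumptions):

  conf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
  [[ -f $conf ]] && source "$conf"   # defines SPDK_TEST_NVMF=1, NET_TYPE=phy, ...
  : "${RUN_NIGHTLY:=0}"              # fall back when the conf omits a knob
  (( SPDK_TEST_NVMF == 1 )) && echo "NVMf tests over $SPDK_TEST_NVMF_TRANSPORT on $SPDK_TEST_NVMF_NICS NICs"
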
00:02:10.591 ++ SPDK_TEST_NVMF_NICS=e810 00:02:10.591 ++ SPDK_TEST_VFIOUSER=1 00:02:10.591 ++ SPDK_RUN_UBSAN=1 00:02:10.591 ++ NET_TYPE=phy 00:02:10.591 ++ RUN_NIGHTLY=0 00:02:10.591 + case $SPDK_TEST_NVMF_NICS in 00:02:10.591 + DRIVERS=ice 00:02:10.591 + [[ tcp == \r\d\m\a ]] 00:02:10.591 + [[ -n ice ]] 00:02:10.591 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:10.591 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:10.591 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:10.591 rmmod: ERROR: Module i40iw is not currently loaded 00:02:10.591 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:10.591 + true 00:02:10.591 + for D in $DRIVERS 00:02:10.591 + sudo modprobe ice 00:02:10.591 + exit 0 00:02:10.601 [Pipeline] } 00:02:10.615 [Pipeline] // withEnv 00:02:10.620 [Pipeline] } 00:02:10.634 [Pipeline] // stage 00:02:10.643 [Pipeline] catchError 00:02:10.645 [Pipeline] { 00:02:10.659 [Pipeline] timeout 00:02:10.659 Timeout set to expire in 1 hr 0 min 00:02:10.661 [Pipeline] { 00:02:10.675 [Pipeline] stage 00:02:10.678 [Pipeline] { (Tests) 00:02:10.692 [Pipeline] sh 00:02:10.978 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:10.978 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:10.978 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:10.978 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:10.978 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:10.978 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:10.978 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:10.978 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:10.978 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:10.978 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:10.978 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:10.978 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:10.978 + source /etc/os-release 00:02:10.978 ++ NAME='Fedora Linux' 00:02:10.978 ++ VERSION='39 (Cloud Edition)' 00:02:10.978 ++ ID=fedora 00:02:10.978 ++ VERSION_ID=39 00:02:10.978 ++ VERSION_CODENAME= 00:02:10.978 ++ PLATFORM_ID=platform:f39 00:02:10.978 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:10.978 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:10.978 ++ LOGO=fedora-logo-icon 00:02:10.978 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:10.978 ++ HOME_URL=https://fedoraproject.org/ 00:02:10.978 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:10.978 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:10.978 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:10.978 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:10.978 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:10.978 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:10.978 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:10.978 ++ SUPPORT_END=2024-11-12 00:02:10.978 ++ VARIANT='Cloud Edition' 00:02:10.978 ++ VARIANT_ID=cloud 00:02:10.978 + uname -a 00:02:10.978 Linux spdk-wfp-03 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux 00:02:10.978 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:14.273 Hugepages
00:02:14.273 node hugesize free / total
00:02:14.273 node0 1048576kB 0 / 0
00:02:14.273 node0 2048kB 0 / 0
00:02:14.274 node1 1048576kB 0 / 0
00:02:14.274 node1 2048kB 0 / 0
00:02:14.274
00:02:14.274 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:14.274 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:14.274 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:14.274 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:14.274 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:14.274 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:14.274 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:14.274 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:14.274 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:14.274 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:02:14.274 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2
00:02:14.274 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:14.274 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:14.274 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:14.274 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:14.274 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:14.274 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:14.274 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:14.274 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:02:14.274 + rm -f /tmp/spdk-ld-path 00:02:14.274 + source autorun-spdk.conf 00:02:14.274 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:14.274 ++ SPDK_TEST_NVMF=1 00:02:14.274 ++ SPDK_TEST_NVME_CLI=1 00:02:14.274 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:14.274 ++ SPDK_TEST_NVMF_NICS=e810 00:02:14.274 ++ SPDK_TEST_VFIOUSER=1 00:02:14.274 ++ SPDK_RUN_UBSAN=1 00:02:14.274 ++ NET_TYPE=phy 00:02:14.274 ++ RUN_NIGHTLY=0 00:02:14.274 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:14.274 + [[ -n '' ]] 00:02:14.274 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:14.274 + for M in /var/spdk/build-*-manifest.txt 00:02:14.274 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:14.274 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:14.274 + for M in /var/spdk/build-*-manifest.txt 00:02:14.274 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:14.274 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:14.274 + for M in /var/spdk/build-*-manifest.txt 00:02:14.274 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:14.274 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:14.274 ++ uname 00:02:14.274 + [[ Linux == \L\i\n\u\x ]] 00:02:14.274 + sudo dmesg -T 00:02:14.274 + sudo dmesg --clear 00:02:14.274 + dmesg_pid=1354894 00:02:14.274 + [[ Fedora Linux == FreeBSD ]] 00:02:14.274 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:14.274 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:14.274 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:14.274 + [[ -x /usr/src/fio-static/fio ]] 00:02:14.274 + export FIO_BIN=/usr/src/fio-static/fio 00:02:14.274 + FIO_BIN=/usr/src/fio-static/fio 00:02:14.274 + sudo dmesg -Tw 00:02:14.274 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:14.274 + [[ !
-v VFIO_QEMU_BIN ]] 00:02:14.274 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:14.274 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:14.274 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:14.274 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:14.274 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:14.274 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:14.274 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:14.274 14:04:14 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:14.274 14:04:14 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:14.274 14:04:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:14.274 14:04:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:14.274 14:04:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:02:14.274 14:04:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:14.274 14:04:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:14.274 14:04:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:14.274 14:04:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:14.274 14:04:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:14.274 14:04:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:14.274 14:04:14 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:14.274 14:04:14 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:14.274 14:04:14 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:14.274 14:04:14 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:14.274 14:04:14 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:14.274 14:04:14 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:14.274 14:04:14 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:14.274 14:04:14 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:14.274 14:04:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:14.274 14:04:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:14.274 14:04:14 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 14:04:14 -- paths/export.sh@5 -- $ export PATH 14:04:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 14:04:14 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:14.274 Traceback (most recent call last): 00:02:14.274 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py", line 24, in <module> 00:02:14.274 import spdk.rpc as rpc # noqa 00:02:14.274 ^^^^^^^^^^^^^^^^^^^^^^ 00:02:14.274 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python/spdk/__init__.py", line 5, in <module> 00:02:14.274 from .version import __version__ 00:02:14.274 ModuleNotFoundError: No module named 'spdk.version' 00:02:14.274 14:04:14 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:14.274 14:04:14 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733835854.XXXXXX 00:02:14.274 14:04:14 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733835854.bX5Sog 00:02:14.274 14:04:14 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:14.274 14:04:14 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:14.274 14:04:14 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:14.274 14:04:14 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:14.274 14:04:14 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:14.274 14:04:14 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:14.274 14:04:14 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:14.274 14:04:14 -- common/autotest_common.sh@10 -- $ set +x 00:02:14.274 14:04:14 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:14.274 14:04:14 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:14.274 14:04:14 -- pm/common@17 -- $ local monitor 00:02:14.274 14:04:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:14.274 14:04:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:14.274 14:04:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:14.274 14:04:14 -- pm/common@21 -- $
date +%s 00:02:14.274 14:04:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:14.274 14:04:14 -- pm/common@21 -- $ date +%s 00:02:14.274 14:04:14 -- pm/common@25 -- $ sleep 1 00:02:14.274 14:04:14 -- pm/common@21 -- $ date +%s 00:02:14.274 14:04:14 -- pm/common@21 -- $ date +%s 00:02:14.274 14:04:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733835855 00:02:14.274 14:04:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733835855 00:02:14.274 14:04:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733835855 00:02:14.274 14:04:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733835855 00:02:14.534 Traceback (most recent call last): 00:02:14.534 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py", line 24, in <module> 00:02:14.534 import spdk.rpc as rpc # noqa 00:02:14.534 ^^^^^^^^^^^^^^^^^^^^^^ 00:02:14.534 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python/spdk/__init__.py", line 5, in <module> 00:02:14.534 from .version import __version__ 00:02:14.534 ModuleNotFoundError: No module named 'spdk.version' 00:02:14.534 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733835855_collect-cpu-load.pm.log 00:02:14.534 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733835855_collect-vmstat.pm.log 00:02:14.534 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733835855_collect-cpu-temp.pm.log 00:02:14.534 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733835855_collect-bmc-pm.bmc.pm.log 00:02:15.472 14:04:16 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:15.472 14:04:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:15.472 14:04:16 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:15.472 14:04:16 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:15.472 14:04:16 -- spdk/autobuild.sh@16 -- $ date -u 00:02:15.472 Tue Dec 10 01:04:16 PM UTC 2024 00:02:15.472 14:04:16 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:15.472 v25.01-pre-305-g02d0d9b38 00:02:15.472 14:04:16 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:15.472 14:04:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:15.472 14:04:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:15.472 14:04:16 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:15.472 14:04:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:15.472 14:04:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.472 ************************************ 00:02:15.472 START TEST ubsan 00:02:15.472 ************************************ 00:02:15.472 14:04:16 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:15.472 using ubsan 00:02:15.472 00:02:15.472 real 0m0.000s
user 0m0.000s 00:02:15.472 sys 0m0.000s 00:02:15.472 14:04:16 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:15.472 14:04:16 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:15.472 ************************************ 00:02:15.472 END TEST ubsan 00:02:15.472 ************************************ 00:02:15.472 14:04:16 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:15.472 14:04:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:15.472 14:04:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:15.472 14:04:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:15.472 14:04:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:15.472 14:04:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:15.472 14:04:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:15.472 14:04:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:15.472 14:04:16 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:15.732 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:15.732 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:15.991 Using 'verbs' RDMA provider 00:02:28.777 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:40.997 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:40.997 Creating mk/config.mk...done. 00:02:40.997 Creating mk/cc.flags.mk...done. 00:02:40.997 Type 'make' to build. 00:02:40.997 14:04:41 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:02:40.997 14:04:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:40.997 14:04:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:40.997 14:04:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:40.997 ************************************ 00:02:40.997 START TEST make 00:02:40.997 ************************************ 00:02:40.997 14:04:41 make -- common/autotest_common.sh@1129 -- $ make -j96 00:02:42.915 The Meson build system 00:02:42.915 Version: 1.5.0 00:02:42.915 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:42.915 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:42.915 Build type: native build 00:02:42.915 Project name: libvfio-user 00:02:42.915 Project version: 0.0.1 00:02:42.915 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:42.915 C linker for the host machine: cc ld.bfd 2.40-14 00:02:42.915 Host machine cpu family: x86_64 00:02:42.915 Host machine cpu: x86_64 00:02:42.915 Run-time dependency threads found: YES 00:02:42.915 Library dl found: YES 00:02:42.915 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:42.915 Run-time dependency json-c found: YES 0.17 00:02:42.915 Run-time dependency cmocka found: YES 1.1.7 00:02:42.915 Program pytest-3 found: NO 00:02:42.915 Program flake8 found: NO 00:02:42.915 Program misspell-fixer found: NO 00:02:42.915 Program restructuredtext-lint found: NO 00:02:42.915 Program valgrind found: YES (/usr/bin/valgrind) 00:02:42.915 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:42.915 Compiler for C supports arguments -Wmissing-declarations: 
YES 00:02:42.915 Compiler for C supports arguments -Wwrite-strings: YES 00:02:42.915 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:42.915 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:42.915 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:42.915 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:42.915 Build targets in project: 8 00:02:42.915 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:42.915 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:42.915 00:02:42.915 libvfio-user 0.0.1 00:02:42.915 00:02:42.915 User defined options 00:02:42.915 buildtype : debug 00:02:42.915 default_library: shared 00:02:42.915 libdir : /usr/local/lib 00:02:42.915 00:02:42.915 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:43.483 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:43.483 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:43.483 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:43.483 [3/37] Compiling C object samples/null.p/null.c.o 00:02:43.483 [4/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:43.483 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:43.483 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:43.483 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:43.483 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:43.483 [9/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:43.483 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:43.483 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:43.483 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:43.483 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:43.483 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:43.483 [15/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:43.483 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:43.483 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:43.483 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:43.483 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:43.483 [20/37] Compiling C object samples/server.p/server.c.o 00:02:43.483 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:43.483 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:43.483 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:43.483 [24/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:43.483 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:43.483 [26/37] Compiling C object samples/client.p/client.c.o 00:02:43.483 [27/37] Linking target samples/client 00:02:43.483 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:43.741 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:43.741 [30/37] Linking target 
test/unit_tests 00:02:43.741 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:02:43.741 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:43.741 [33/37] Linking target samples/server 00:02:43.741 [34/37] Linking target samples/null 00:02:43.741 [35/37] Linking target samples/gpio-pci-idio-16 00:02:43.741 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:43.741 [37/37] Linking target samples/lspci 00:02:43.741 INFO: autodetecting backend as ninja 00:02:43.741 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:44.000 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:44.260 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:44.260 ninja: no work to do. 00:02:49.539 The Meson build system 00:02:49.539 Version: 1.5.0 00:02:49.539 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:49.539 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:49.539 Build type: native build 00:02:49.539 Program cat found: YES (/usr/bin/cat) 00:02:49.539 Project name: DPDK 00:02:49.539 Project version: 24.03.0 00:02:49.539 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:49.539 C linker for the host machine: cc ld.bfd 2.40-14 00:02:49.539 Host machine cpu family: x86_64 00:02:49.539 Host machine cpu: x86_64 00:02:49.539 Message: ## Building in Developer Mode ## 00:02:49.539 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:49.539 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:49.539 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:49.539 Program python3 found: YES (/usr/bin/python3) 00:02:49.539 Program cat found: YES (/usr/bin/cat) 00:02:49.539 Compiler for C supports arguments -march=native: YES 00:02:49.539 Checking for size of "void *" : 8 00:02:49.539 Checking for size of "void *" : 8 (cached) 00:02:49.539 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:49.539 Library m found: YES 00:02:49.539 Library numa found: YES 00:02:49.539 Has header "numaif.h" : YES 00:02:49.539 Library fdt found: NO 00:02:49.539 Library execinfo found: NO 00:02:49.539 Has header "execinfo.h" : YES 00:02:49.539 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:49.539 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:49.539 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:49.539 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:49.539 Run-time dependency openssl found: YES 3.1.1 00:02:49.539 Run-time dependency libpcap found: YES 1.10.4 00:02:49.539 Has header "pcap.h" with dependency libpcap: YES 00:02:49.539 Compiler for C supports arguments -Wcast-qual: YES 00:02:49.539 Compiler for C supports arguments -Wdeprecated: YES 00:02:49.539 Compiler for C supports arguments -Wformat: YES 00:02:49.539 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:49.539 Compiler for C supports arguments -Wformat-security: NO 00:02:49.539 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:49.539 Compiler for C 
supports arguments -Wmissing-prototypes: YES 00:02:49.539 Compiler for C supports arguments -Wnested-externs: YES 00:02:49.539 Compiler for C supports arguments -Wold-style-definition: YES 00:02:49.539 Compiler for C supports arguments -Wpointer-arith: YES 00:02:49.539 Compiler for C supports arguments -Wsign-compare: YES 00:02:49.539 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:49.539 Compiler for C supports arguments -Wundef: YES 00:02:49.539 Compiler for C supports arguments -Wwrite-strings: YES 00:02:49.539 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:49.539 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:49.539 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:49.540 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:49.540 Program objdump found: YES (/usr/bin/objdump) 00:02:49.540 Compiler for C supports arguments -mavx512f: YES 00:02:49.540 Checking if "AVX512 checking" compiles: YES 00:02:49.540 Fetching value of define "__SSE4_2__" : 1 00:02:49.540 Fetching value of define "__AES__" : 1 00:02:49.540 Fetching value of define "__AVX__" : 1 00:02:49.540 Fetching value of define "__AVX2__" : 1 00:02:49.540 Fetching value of define "__AVX512BW__" : 1 00:02:49.540 Fetching value of define "__AVX512CD__" : 1 00:02:49.540 Fetching value of define "__AVX512DQ__" : 1 00:02:49.540 Fetching value of define "__AVX512F__" : 1 00:02:49.540 Fetching value of define "__AVX512VL__" : 1 00:02:49.540 Fetching value of define "__PCLMUL__" : 1 00:02:49.540 Fetching value of define "__RDRND__" : 1 00:02:49.540 Fetching value of define "__RDSEED__" : 1 00:02:49.540 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:49.540 Fetching value of define "__znver1__" : (undefined) 00:02:49.540 Fetching value of define "__znver2__" : (undefined) 00:02:49.540 Fetching value of define "__znver3__" : (undefined) 00:02:49.540 Fetching value of define "__znver4__" : (undefined) 00:02:49.540 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:49.540 Message: lib/log: Defining dependency "log" 00:02:49.540 Message: lib/kvargs: Defining dependency "kvargs" 00:02:49.540 Message: lib/telemetry: Defining dependency "telemetry" 00:02:49.540 Checking for function "getentropy" : NO 00:02:49.540 Message: lib/eal: Defining dependency "eal" 00:02:49.540 Message: lib/ring: Defining dependency "ring" 00:02:49.540 Message: lib/rcu: Defining dependency "rcu" 00:02:49.540 Message: lib/mempool: Defining dependency "mempool" 00:02:49.540 Message: lib/mbuf: Defining dependency "mbuf" 00:02:49.540 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:49.540 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:49.540 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:49.540 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:49.540 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:49.540 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:49.540 Compiler for C supports arguments -mpclmul: YES 00:02:49.540 Compiler for C supports arguments -maes: YES 00:02:49.540 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:49.540 Compiler for C supports arguments -mavx512bw: YES 00:02:49.540 Compiler for C supports arguments -mavx512dq: YES 00:02:49.540 Compiler for C supports arguments -mavx512vl: YES 00:02:49.540 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:49.540 Compiler for C supports arguments -mavx2: YES 00:02:49.540 Compiler 
for C supports arguments -mavx: YES 00:02:49.540 Message: lib/net: Defining dependency "net" 00:02:49.540 Message: lib/meter: Defining dependency "meter" 00:02:49.540 Message: lib/ethdev: Defining dependency "ethdev" 00:02:49.540 Message: lib/pci: Defining dependency "pci" 00:02:49.540 Message: lib/cmdline: Defining dependency "cmdline" 00:02:49.540 Message: lib/hash: Defining dependency "hash" 00:02:49.540 Message: lib/timer: Defining dependency "timer" 00:02:49.540 Message: lib/compressdev: Defining dependency "compressdev" 00:02:49.540 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:49.540 Message: lib/dmadev: Defining dependency "dmadev" 00:02:49.540 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:49.540 Message: lib/power: Defining dependency "power" 00:02:49.540 Message: lib/reorder: Defining dependency "reorder" 00:02:49.540 Message: lib/security: Defining dependency "security" 00:02:49.540 Has header "linux/userfaultfd.h" : YES 00:02:49.540 Has header "linux/vduse.h" : YES 00:02:49.540 Message: lib/vhost: Defining dependency "vhost" 00:02:49.540 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:49.540 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:49.540 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:49.540 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:49.540 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:49.540 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:49.540 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:49.540 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:49.540 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:49.540 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:49.540 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:49.540 Configuring doxy-api-html.conf using configuration 00:02:49.540 Configuring doxy-api-man.conf using configuration 00:02:49.540 Program mandb found: YES (/usr/bin/mandb) 00:02:49.540 Program sphinx-build found: NO 00:02:49.540 Configuring rte_build_config.h using configuration 00:02:49.540 Message: 00:02:49.540 ================= 00:02:49.540 Applications Enabled 00:02:49.540 ================= 00:02:49.540 00:02:49.540 apps: 00:02:49.540 00:02:49.540 00:02:49.540 Message: 00:02:49.540 ================= 00:02:49.540 Libraries Enabled 00:02:49.540 ================= 00:02:49.540 00:02:49.540 libs: 00:02:49.540 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:49.540 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:49.540 cryptodev, dmadev, power, reorder, security, vhost, 00:02:49.540 00:02:49.540 Message: 00:02:49.540 =============== 00:02:49.540 Drivers Enabled 00:02:49.540 =============== 00:02:49.540 00:02:49.540 common: 00:02:49.540 00:02:49.540 bus: 00:02:49.540 pci, vdev, 00:02:49.540 mempool: 00:02:49.540 ring, 00:02:49.540 dma: 00:02:49.540 00:02:49.540 net: 00:02:49.540 00:02:49.540 crypto: 00:02:49.540 00:02:49.540 compress: 00:02:49.540 00:02:49.540 vdpa: 00:02:49.540 00:02:49.540 00:02:49.540 Message: 00:02:49.540 ================= 00:02:49.540 Content Skipped 00:02:49.540 ================= 00:02:49.540 00:02:49.540 apps: 00:02:49.540 dumpcap: explicitly disabled via build config 00:02:49.540 graph: explicitly disabled via build config 00:02:49.540 pdump: explicitly disabled via build config 00:02:49.540 proc-info: 
explicitly disabled via build config 00:02:49.540 test-acl: explicitly disabled via build config 00:02:49.540 test-bbdev: explicitly disabled via build config 00:02:49.540 test-cmdline: explicitly disabled via build config 00:02:49.540 test-compress-perf: explicitly disabled via build config 00:02:49.540 test-crypto-perf: explicitly disabled via build config 00:02:49.540 test-dma-perf: explicitly disabled via build config 00:02:49.540 test-eventdev: explicitly disabled via build config 00:02:49.540 test-fib: explicitly disabled via build config 00:02:49.540 test-flow-perf: explicitly disabled via build config 00:02:49.540 test-gpudev: explicitly disabled via build config 00:02:49.540 test-mldev: explicitly disabled via build config 00:02:49.540 test-pipeline: explicitly disabled via build config 00:02:49.540 test-pmd: explicitly disabled via build config 00:02:49.540 test-regex: explicitly disabled via build config 00:02:49.540 test-sad: explicitly disabled via build config 00:02:49.540 test-security-perf: explicitly disabled via build config 00:02:49.540 00:02:49.540 libs: 00:02:49.540 argparse: explicitly disabled via build config 00:02:49.540 metrics: explicitly disabled via build config 00:02:49.540 acl: explicitly disabled via build config 00:02:49.540 bbdev: explicitly disabled via build config 00:02:49.540 bitratestats: explicitly disabled via build config 00:02:49.540 bpf: explicitly disabled via build config 00:02:49.540 cfgfile: explicitly disabled via build config 00:02:49.540 distributor: explicitly disabled via build config 00:02:49.540 efd: explicitly disabled via build config 00:02:49.540 eventdev: explicitly disabled via build config 00:02:49.540 dispatcher: explicitly disabled via build config 00:02:49.540 gpudev: explicitly disabled via build config 00:02:49.540 gro: explicitly disabled via build config 00:02:49.540 gso: explicitly disabled via build config 00:02:49.540 ip_frag: explicitly disabled via build config 00:02:49.540 jobstats: explicitly disabled via build config 00:02:49.540 latencystats: explicitly disabled via build config 00:02:49.540 lpm: explicitly disabled via build config 00:02:49.540 member: explicitly disabled via build config 00:02:49.540 pcapng: explicitly disabled via build config 00:02:49.540 rawdev: explicitly disabled via build config 00:02:49.540 regexdev: explicitly disabled via build config 00:02:49.540 mldev: explicitly disabled via build config 00:02:49.540 rib: explicitly disabled via build config 00:02:49.540 sched: explicitly disabled via build config 00:02:49.540 stack: explicitly disabled via build config 00:02:49.540 ipsec: explicitly disabled via build config 00:02:49.540 pdcp: explicitly disabled via build config 00:02:49.540 fib: explicitly disabled via build config 00:02:49.540 port: explicitly disabled via build config 00:02:49.540 pdump: explicitly disabled via build config 00:02:49.540 table: explicitly disabled via build config 00:02:49.540 pipeline: explicitly disabled via build config 00:02:49.540 graph: explicitly disabled via build config 00:02:49.540 node: explicitly disabled via build config 00:02:49.540 00:02:49.540 drivers: 00:02:49.540 common/cpt: not in enabled drivers build config 00:02:49.540 common/dpaax: not in enabled drivers build config 00:02:49.540 common/iavf: not in enabled drivers build config 00:02:49.540 common/idpf: not in enabled drivers build config 00:02:49.540 common/ionic: not in enabled drivers build config 00:02:49.540 common/mvep: not in enabled drivers build config 00:02:49.540 common/octeontx: 
not in enabled drivers build config 00:02:49.540 bus/auxiliary: not in enabled drivers build config 00:02:49.540 bus/cdx: not in enabled drivers build config 00:02:49.540 bus/dpaa: not in enabled drivers build config 00:02:49.540 bus/fslmc: not in enabled drivers build config 00:02:49.540 bus/ifpga: not in enabled drivers build config 00:02:49.540 bus/platform: not in enabled drivers build config 00:02:49.540 bus/uacce: not in enabled drivers build config 00:02:49.540 bus/vmbus: not in enabled drivers build config 00:02:49.540 common/cnxk: not in enabled drivers build config 00:02:49.540 common/mlx5: not in enabled drivers build config 00:02:49.540 common/nfp: not in enabled drivers build config 00:02:49.540 common/nitrox: not in enabled drivers build config 00:02:49.540 common/qat: not in enabled drivers build config 00:02:49.540 common/sfc_efx: not in enabled drivers build config 00:02:49.540 mempool/bucket: not in enabled drivers build config 00:02:49.541 mempool/cnxk: not in enabled drivers build config 00:02:49.541 mempool/dpaa: not in enabled drivers build config 00:02:49.541 mempool/dpaa2: not in enabled drivers build config 00:02:49.541 mempool/octeontx: not in enabled drivers build config 00:02:49.541 mempool/stack: not in enabled drivers build config 00:02:49.541 dma/cnxk: not in enabled drivers build config 00:02:49.541 dma/dpaa: not in enabled drivers build config 00:02:49.541 dma/dpaa2: not in enabled drivers build config 00:02:49.541 dma/hisilicon: not in enabled drivers build config 00:02:49.541 dma/idxd: not in enabled drivers build config 00:02:49.541 dma/ioat: not in enabled drivers build config 00:02:49.541 dma/skeleton: not in enabled drivers build config 00:02:49.541 net/af_packet: not in enabled drivers build config 00:02:49.541 net/af_xdp: not in enabled drivers build config 00:02:49.541 net/ark: not in enabled drivers build config 00:02:49.541 net/atlantic: not in enabled drivers build config 00:02:49.541 net/avp: not in enabled drivers build config 00:02:49.541 net/axgbe: not in enabled drivers build config 00:02:49.541 net/bnx2x: not in enabled drivers build config 00:02:49.541 net/bnxt: not in enabled drivers build config 00:02:49.541 net/bonding: not in enabled drivers build config 00:02:49.541 net/cnxk: not in enabled drivers build config 00:02:49.541 net/cpfl: not in enabled drivers build config 00:02:49.541 net/cxgbe: not in enabled drivers build config 00:02:49.541 net/dpaa: not in enabled drivers build config 00:02:49.541 net/dpaa2: not in enabled drivers build config 00:02:49.541 net/e1000: not in enabled drivers build config 00:02:49.541 net/ena: not in enabled drivers build config 00:02:49.541 net/enetc: not in enabled drivers build config 00:02:49.541 net/enetfec: not in enabled drivers build config 00:02:49.541 net/enic: not in enabled drivers build config 00:02:49.541 net/failsafe: not in enabled drivers build config 00:02:49.541 net/fm10k: not in enabled drivers build config 00:02:49.541 net/gve: not in enabled drivers build config 00:02:49.541 net/hinic: not in enabled drivers build config 00:02:49.541 net/hns3: not in enabled drivers build config 00:02:49.541 net/i40e: not in enabled drivers build config 00:02:49.541 net/iavf: not in enabled drivers build config 00:02:49.541 net/ice: not in enabled drivers build config 00:02:49.541 net/idpf: not in enabled drivers build config 00:02:49.541 net/igc: not in enabled drivers build config 00:02:49.541 net/ionic: not in enabled drivers build config 00:02:49.541 net/ipn3ke: not in enabled drivers build 
config 00:02:49.541 net/ixgbe: not in enabled drivers build config 00:02:49.541 net/mana: not in enabled drivers build config 00:02:49.541 net/memif: not in enabled drivers build config 00:02:49.541 net/mlx4: not in enabled drivers build config 00:02:49.541 net/mlx5: not in enabled drivers build config 00:02:49.541 net/mvneta: not in enabled drivers build config 00:02:49.541 net/mvpp2: not in enabled drivers build config 00:02:49.541 net/netvsc: not in enabled drivers build config 00:02:49.541 net/nfb: not in enabled drivers build config 00:02:49.541 net/nfp: not in enabled drivers build config 00:02:49.541 net/ngbe: not in enabled drivers build config 00:02:49.541 net/null: not in enabled drivers build config 00:02:49.541 net/octeontx: not in enabled drivers build config 00:02:49.541 net/octeon_ep: not in enabled drivers build config 00:02:49.541 net/pcap: not in enabled drivers build config 00:02:49.541 net/pfe: not in enabled drivers build config 00:02:49.541 net/qede: not in enabled drivers build config 00:02:49.541 net/ring: not in enabled drivers build config 00:02:49.541 net/sfc: not in enabled drivers build config 00:02:49.541 net/softnic: not in enabled drivers build config 00:02:49.541 net/tap: not in enabled drivers build config 00:02:49.541 net/thunderx: not in enabled drivers build config 00:02:49.541 net/txgbe: not in enabled drivers build config 00:02:49.541 net/vdev_netvsc: not in enabled drivers build config 00:02:49.541 net/vhost: not in enabled drivers build config 00:02:49.541 net/virtio: not in enabled drivers build config 00:02:49.541 net/vmxnet3: not in enabled drivers build config 00:02:49.541 raw/*: missing internal dependency, "rawdev" 00:02:49.541 crypto/armv8: not in enabled drivers build config 00:02:49.541 crypto/bcmfs: not in enabled drivers build config 00:02:49.541 crypto/caam_jr: not in enabled drivers build config 00:02:49.541 crypto/ccp: not in enabled drivers build config 00:02:49.541 crypto/cnxk: not in enabled drivers build config 00:02:49.541 crypto/dpaa_sec: not in enabled drivers build config 00:02:49.541 crypto/dpaa2_sec: not in enabled drivers build config 00:02:49.541 crypto/ipsec_mb: not in enabled drivers build config 00:02:49.541 crypto/mlx5: not in enabled drivers build config 00:02:49.541 crypto/mvsam: not in enabled drivers build config 00:02:49.541 crypto/nitrox: not in enabled drivers build config 00:02:49.541 crypto/null: not in enabled drivers build config 00:02:49.541 crypto/octeontx: not in enabled drivers build config 00:02:49.541 crypto/openssl: not in enabled drivers build config 00:02:49.541 crypto/scheduler: not in enabled drivers build config 00:02:49.541 crypto/uadk: not in enabled drivers build config 00:02:49.541 crypto/virtio: not in enabled drivers build config 00:02:49.541 compress/isal: not in enabled drivers build config 00:02:49.541 compress/mlx5: not in enabled drivers build config 00:02:49.541 compress/nitrox: not in enabled drivers build config 00:02:49.541 compress/octeontx: not in enabled drivers build config 00:02:49.541 compress/zlib: not in enabled drivers build config 00:02:49.541 regex/*: missing internal dependency, "regexdev" 00:02:49.541 ml/*: missing internal dependency, "mldev" 00:02:49.541 vdpa/ifc: not in enabled drivers build config 00:02:49.541 vdpa/mlx5: not in enabled drivers build config 00:02:49.541 vdpa/nfp: not in enabled drivers build config 00:02:49.541 vdpa/sfc: not in enabled drivers build config 00:02:49.541 event/*: missing internal dependency, "eventdev" 00:02:49.541 baseband/*: missing 
internal dependency, "bbdev" 00:02:49.541 gpu/*: missing internal dependency, "gpudev" 00:02:49.541 00:02:49.541 00:02:49.541 Build targets in project: 85 00:02:49.541 00:02:49.541 DPDK 24.03.0 00:02:49.541 00:02:49.541 User defined options 00:02:49.541 buildtype : debug 00:02:49.541 default_library : shared 00:02:49.541 libdir : lib 00:02:49.541 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:49.541 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:49.541 c_link_args : 00:02:49.541 cpu_instruction_set: native 00:02:49.541 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:49.541 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:49.541 enable_docs : false 00:02:49.541 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:49.541 enable_kmods : false 00:02:49.541 max_lcores : 128 00:02:49.541 tests : false 00:02:49.541 00:02:49.541 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:50.116 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:50.116 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:50.116 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:50.116 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:50.116 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:50.116 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:50.116 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:50.116 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:50.116 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:50.116 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:50.116 [10/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:50.116 [11/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:50.116 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:50.116 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:50.116 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:50.116 [15/268] Linking static target lib/librte_kvargs.a 00:02:50.116 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:50.116 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:50.375 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:50.375 [19/268] Linking static target lib/librte_log.a 00:02:50.375 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:50.375 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:50.375 [22/268] Linking static target lib/librte_pci.a 00:02:50.375 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:50.375 
[24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:50.646 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:50.646 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:50.646 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:50.646 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:50.646 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:50.646 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:50.646 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:50.646 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:50.646 [33/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:50.646 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:50.646 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:50.646 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:50.646 [37/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:50.646 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:50.646 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:50.646 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:50.646 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:50.646 [42/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:50.646 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:50.646 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:50.646 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:50.646 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:50.646 [47/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:50.646 [48/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:50.646 [49/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:50.646 [50/268] Linking static target lib/librte_meter.a 00:02:50.646 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:50.646 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:50.646 [53/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:50.646 [54/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:50.646 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:50.646 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:50.646 [57/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:50.646 [58/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:50.646 [59/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:50.646 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:50.646 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:50.646 [62/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:50.646 [63/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 
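
Each "Linking static target" entry in this build produces one librte_*.a next to the shared default_library variant, installed under the prefix shown in the configuration summary. Downstream consumers (SPDK included) normally locate the result through pkg-config rather than hard-coded paths; a hypothetical check against this layout (the pkgconfig subdirectory is inferred from the prefix and libdir settings and is not shown in the log):

  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk        # expect 24.03.0, per the meson header above
  pkg-config --cflags --libs libdpdk     # compile/link flags for the built DPDK
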
00:02:50.646 [64/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:50.646 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:50.646 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:50.646 [67/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:50.646 [68/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:50.646 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:50.646 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:50.646 [71/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:50.646 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:50.646 [73/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:50.646 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:50.646 [75/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:50.905 [76/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:50.905 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:50.905 [78/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:50.905 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:50.905 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:50.905 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:50.905 [82/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:50.905 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:50.905 [84/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:50.905 [85/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:50.905 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:50.905 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:50.905 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:50.905 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:50.905 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:50.905 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:50.905 [92/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.905 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:50.905 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:50.905 [95/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:50.905 [96/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.905 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:50.905 [98/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:50.905 [99/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:50.905 [100/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:50.905 [101/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:50.905 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:50.905 [103/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:50.905 [104/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:50.905 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:50.905 [106/268] Linking static target lib/librte_ring.a 00:02:50.905 [107/268] Linking static target lib/librte_rcu.a 00:02:50.905 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:50.905 [109/268] Linking static target lib/librte_telemetry.a 00:02:50.905 [110/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:50.905 [111/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:50.905 [112/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:50.905 [113/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:50.905 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:50.905 [115/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:50.905 [116/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:50.905 [117/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:50.905 [118/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:50.905 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:50.905 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:50.905 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:50.905 [122/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:50.905 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:50.905 [124/268] Linking static target lib/librte_mempool.a 00:02:50.905 [125/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:50.905 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:50.905 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:50.905 [128/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.905 [129/268] Linking static target lib/librte_cmdline.a 00:02:50.905 [130/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:50.905 [131/268] Linking static target lib/librte_eal.a 00:02:50.905 [132/268] Linking static target lib/librte_net.a 00:02:50.905 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:50.905 [134/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:51.165 [135/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.165 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:51.165 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:51.165 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:51.165 [139/268] Linking target lib/librte_log.so.24.1 00:02:51.165 [140/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:51.165 [141/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:51.165 [142/268] Linking static target lib/librte_timer.a 00:02:51.165 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:51.165 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:51.165 [145/268] Generating lib/rcu.sym_chk with 
a custom command (wrapped by meson to capture output) 00:02:51.165 [146/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:51.165 [147/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:51.165 [148/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.165 [149/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:51.165 [150/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:51.165 [151/268] Linking static target lib/librte_mbuf.a 00:02:51.165 [152/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:51.165 [153/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:51.165 [154/268] Linking static target lib/librte_dmadev.a 00:02:51.165 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:51.165 [156/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:51.165 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:51.165 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:51.165 [159/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:51.165 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:51.165 [161/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:51.165 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:51.165 [163/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:51.165 [164/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:51.165 [165/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.165 [166/268] Linking static target lib/librte_reorder.a 00:02:51.165 [167/268] Linking target lib/librte_kvargs.so.24.1 00:02:51.165 [168/268] Linking static target lib/librte_compressdev.a 00:02:51.165 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:51.165 [170/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:51.165 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:51.165 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:51.165 [173/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:51.165 [174/268] Linking target lib/librte_telemetry.so.24.1 00:02:51.165 [175/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.165 [176/268] Linking static target lib/librte_security.a 00:02:51.425 [177/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:51.425 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:51.425 [179/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:51.425 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:51.425 [181/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:51.425 [182/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:51.425 [183/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:51.425 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:51.425 
[185/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:51.425 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:51.425 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:51.425 [188/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:51.425 [189/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:51.425 [190/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:51.425 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:51.425 [192/268] Linking static target lib/librte_power.a 00:02:51.425 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:51.425 [194/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:51.425 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:51.425 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:51.425 [197/268] Linking static target lib/librte_hash.a 00:02:51.425 [198/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.425 [199/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:51.425 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:51.684 [201/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:51.684 [202/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.684 [203/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.684 [204/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.684 [205/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.684 [206/268] Linking static target drivers/librte_mempool_ring.a 00:02:51.684 [207/268] Linking static target drivers/librte_bus_vdev.a 00:02:51.684 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:51.684 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.684 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.684 [211/268] Linking static target drivers/librte_bus_pci.a 00:02:51.684 [212/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.684 [213/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.684 [214/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:51.684 [215/268] Linking static target lib/librte_cryptodev.a 00:02:51.944 [216/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.944 [217/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.944 [218/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.944 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:51.944 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.944 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.944 [222/268] Linking static target lib/librte_ethdev.a 
00:02:51.944 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.203 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:52.203 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.462 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.462 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.399 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:53.399 [229/268] Linking static target lib/librte_vhost.a 00:02:53.657 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.032 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.476 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.056 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.056 [234/268] Linking target lib/librte_eal.so.24.1 00:03:01.056 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:01.056 [236/268] Linking target lib/librte_ring.so.24.1 00:03:01.056 [237/268] Linking target lib/librte_pci.so.24.1 00:03:01.056 [238/268] Linking target lib/librte_meter.so.24.1 00:03:01.056 [239/268] Linking target lib/librte_timer.so.24.1 00:03:01.056 [240/268] Linking target lib/librte_dmadev.so.24.1 00:03:01.056 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:01.315 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:01.315 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:01.315 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:01.315 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:01.315 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:01.315 [247/268] Linking target lib/librte_rcu.so.24.1 00:03:01.315 [248/268] Linking target lib/librte_mempool.so.24.1 00:03:01.315 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:01.315 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:01.315 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:01.575 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:01.575 [253/268] Linking target lib/librte_mbuf.so.24.1 00:03:01.575 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:01.575 [255/268] Linking target lib/librte_reorder.so.24.1 00:03:01.575 [256/268] Linking target lib/librte_net.so.24.1 00:03:01.575 [257/268] Linking target lib/librte_compressdev.so.24.1 00:03:01.575 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:01.834 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:01.834 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:01.834 [261/268] Linking target lib/librte_hash.so.24.1 00:03:01.834 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:01.834 [263/268] Linking target lib/librte_security.so.24.1 00:03:01.834 [264/268] Linking target lib/librte_ethdev.so.24.1 
00:03:02.093 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:02.093 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:02.093 [267/268] Linking target lib/librte_power.so.24.1 00:03:02.093 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:02.093 INFO: autodetecting backend as ninja 00:03:02.093 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:03:12.071 CC lib/ut/ut.o 00:03:12.071 CC lib/ut_mock/mock.o 00:03:12.071 CC lib/log/log.o 00:03:12.071 CC lib/log/log_flags.o 00:03:12.071 CC lib/log/log_deprecated.o 00:03:12.330 LIB libspdk_ut_mock.a 00:03:12.330 LIB libspdk_ut.a 00:03:12.330 LIB libspdk_log.a 00:03:12.330 SO libspdk_ut_mock.so.6.0 00:03:12.330 SO libspdk_ut.so.2.0 00:03:12.330 SO libspdk_log.so.7.1 00:03:12.330 SYMLINK libspdk_ut_mock.so 00:03:12.330 SYMLINK libspdk_ut.so 00:03:12.330 SYMLINK libspdk_log.so 00:03:12.898 CC lib/ioat/ioat.o 00:03:12.898 CXX lib/trace_parser/trace.o 00:03:12.898 CC lib/util/base64.o 00:03:12.898 CC lib/dma/dma.o 00:03:12.898 CC lib/util/bit_array.o 00:03:12.898 CC lib/util/cpuset.o 00:03:12.898 CC lib/util/crc16.o 00:03:12.898 CC lib/util/crc32.o 00:03:12.898 CC lib/util/crc32c.o 00:03:12.898 CC lib/util/crc32_ieee.o 00:03:12.898 CC lib/util/crc64.o 00:03:12.898 CC lib/util/dif.o 00:03:12.898 CC lib/util/fd.o 00:03:12.898 CC lib/util/fd_group.o 00:03:12.898 CC lib/util/file.o 00:03:12.898 CC lib/util/hexlify.o 00:03:12.898 CC lib/util/iov.o 00:03:12.898 CC lib/util/math.o 00:03:12.898 CC lib/util/net.o 00:03:12.898 CC lib/util/pipe.o 00:03:12.898 CC lib/util/strerror_tls.o 00:03:12.898 CC lib/util/string.o 00:03:12.898 CC lib/util/uuid.o 00:03:12.898 CC lib/util/xor.o 00:03:12.898 CC lib/util/zipf.o 00:03:12.898 CC lib/util/md5.o 00:03:12.898 CC lib/vfio_user/host/vfio_user.o 00:03:12.898 CC lib/vfio_user/host/vfio_user_pci.o 00:03:13.157 LIB libspdk_dma.a 00:03:13.157 SO libspdk_dma.so.5.0 00:03:13.157 LIB libspdk_ioat.a 00:03:13.157 SO libspdk_ioat.so.7.0 00:03:13.157 SYMLINK libspdk_dma.so 00:03:13.157 SYMLINK libspdk_ioat.so 00:03:13.157 LIB libspdk_vfio_user.a 00:03:13.157 SO libspdk_vfio_user.so.5.0 00:03:13.157 LIB libspdk_util.a 00:03:13.416 SYMLINK libspdk_vfio_user.so 00:03:13.416 SO libspdk_util.so.10.1 00:03:13.416 SYMLINK libspdk_util.so 00:03:13.416 LIB libspdk_trace_parser.a 00:03:13.674 SO libspdk_trace_parser.so.6.0 00:03:13.675 SYMLINK libspdk_trace_parser.so 00:03:13.675 CC lib/rdma_utils/rdma_utils.o 00:03:13.675 CC lib/vmd/vmd.o 00:03:13.675 CC lib/vmd/led.o 00:03:13.675 CC lib/conf/conf.o 00:03:13.933 CC lib/json/json_parse.o 00:03:13.933 CC lib/json/json_util.o 00:03:13.933 CC lib/idxd/idxd.o 00:03:13.933 CC lib/json/json_write.o 00:03:13.933 CC lib/env_dpdk/env.o 00:03:13.933 CC lib/idxd/idxd_user.o 00:03:13.933 CC lib/env_dpdk/memory.o 00:03:13.933 CC lib/idxd/idxd_kernel.o 00:03:13.933 CC lib/env_dpdk/pci.o 00:03:13.933 CC lib/env_dpdk/init.o 00:03:13.933 CC lib/env_dpdk/threads.o 00:03:13.933 CC lib/env_dpdk/pci_ioat.o 00:03:13.933 CC lib/env_dpdk/pci_virtio.o 00:03:13.933 CC lib/env_dpdk/pci_vmd.o 00:03:13.933 CC lib/env_dpdk/pci_idxd.o 00:03:13.933 CC lib/env_dpdk/pci_event.o 00:03:13.933 CC lib/env_dpdk/sigbus_handler.o 00:03:13.933 CC lib/env_dpdk/pci_dpdk.o 00:03:13.933 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:13.933 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:13.933 LIB libspdk_conf.a 00:03:14.191 LIB libspdk_rdma_utils.a 00:03:14.191 SO 
libspdk_conf.so.6.0 00:03:14.191 LIB libspdk_json.a 00:03:14.191 SO libspdk_rdma_utils.so.1.0 00:03:14.191 SYMLINK libspdk_conf.so 00:03:14.191 SO libspdk_json.so.6.0 00:03:14.191 SYMLINK libspdk_rdma_utils.so 00:03:14.191 SYMLINK libspdk_json.so 00:03:14.191 LIB libspdk_idxd.a 00:03:14.449 SO libspdk_idxd.so.12.1 00:03:14.449 LIB libspdk_vmd.a 00:03:14.449 SO libspdk_vmd.so.6.0 00:03:14.449 SYMLINK libspdk_idxd.so 00:03:14.449 SYMLINK libspdk_vmd.so 00:03:14.449 CC lib/rdma_provider/common.o 00:03:14.449 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:14.449 CC lib/jsonrpc/jsonrpc_server.o 00:03:14.449 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:14.449 CC lib/jsonrpc/jsonrpc_client.o 00:03:14.449 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:14.707 LIB libspdk_rdma_provider.a 00:03:14.707 SO libspdk_rdma_provider.so.7.0 00:03:14.707 LIB libspdk_jsonrpc.a 00:03:14.707 SYMLINK libspdk_rdma_provider.so 00:03:14.707 SO libspdk_jsonrpc.so.6.0 00:03:14.965 SYMLINK libspdk_jsonrpc.so 00:03:14.965 LIB libspdk_env_dpdk.a 00:03:14.965 SO libspdk_env_dpdk.so.15.1 00:03:14.965 SYMLINK libspdk_env_dpdk.so 00:03:15.224 CC lib/rpc/rpc.o 00:03:15.483 LIB libspdk_rpc.a 00:03:15.483 SO libspdk_rpc.so.6.0 00:03:15.483 SYMLINK libspdk_rpc.so 00:03:15.742 CC lib/notify/notify.o 00:03:15.742 CC lib/trace/trace.o 00:03:15.742 CC lib/notify/notify_rpc.o 00:03:15.742 CC lib/trace/trace_flags.o 00:03:15.742 CC lib/trace/trace_rpc.o 00:03:15.742 CC lib/keyring/keyring.o 00:03:15.742 CC lib/keyring/keyring_rpc.o 00:03:16.000 LIB libspdk_notify.a 00:03:16.000 SO libspdk_notify.so.6.0 00:03:16.000 LIB libspdk_keyring.a 00:03:16.000 LIB libspdk_trace.a 00:03:16.000 SYMLINK libspdk_notify.so 00:03:16.000 SO libspdk_keyring.so.2.0 00:03:16.000 SO libspdk_trace.so.11.0 00:03:16.000 SYMLINK libspdk_keyring.so 00:03:16.258 SYMLINK libspdk_trace.so 00:03:16.516 CC lib/thread/thread.o 00:03:16.516 CC lib/thread/iobuf.o 00:03:16.516 CC lib/sock/sock.o 00:03:16.516 CC lib/sock/sock_rpc.o 00:03:16.775 LIB libspdk_sock.a 00:03:16.775 SO libspdk_sock.so.10.0 00:03:16.775 SYMLINK libspdk_sock.so 00:03:17.341 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:17.341 CC lib/nvme/nvme_ctrlr.o 00:03:17.341 CC lib/nvme/nvme_fabric.o 00:03:17.341 CC lib/nvme/nvme_ns_cmd.o 00:03:17.341 CC lib/nvme/nvme_ns.o 00:03:17.341 CC lib/nvme/nvme_pcie_common.o 00:03:17.341 CC lib/nvme/nvme_pcie.o 00:03:17.341 CC lib/nvme/nvme_qpair.o 00:03:17.341 CC lib/nvme/nvme.o 00:03:17.341 CC lib/nvme/nvme_quirks.o 00:03:17.341 CC lib/nvme/nvme_transport.o 00:03:17.341 CC lib/nvme/nvme_discovery.o 00:03:17.341 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:17.341 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:17.341 CC lib/nvme/nvme_tcp.o 00:03:17.341 CC lib/nvme/nvme_opal.o 00:03:17.341 CC lib/nvme/nvme_io_msg.o 00:03:17.341 CC lib/nvme/nvme_poll_group.o 00:03:17.341 CC lib/nvme/nvme_zns.o 00:03:17.341 CC lib/nvme/nvme_stubs.o 00:03:17.341 CC lib/nvme/nvme_auth.o 00:03:17.341 CC lib/nvme/nvme_cuse.o 00:03:17.341 CC lib/nvme/nvme_vfio_user.o 00:03:17.341 CC lib/nvme/nvme_rdma.o 00:03:17.600 LIB libspdk_thread.a 00:03:17.600 SO libspdk_thread.so.11.0 00:03:17.600 SYMLINK libspdk_thread.so 00:03:18.167 CC lib/vfu_tgt/tgt_endpoint.o 00:03:18.167 CC lib/vfu_tgt/tgt_rpc.o 00:03:18.167 CC lib/init/json_config.o 00:03:18.167 CC lib/init/subsystem.o 00:03:18.167 CC lib/init/subsystem_rpc.o 00:03:18.167 CC lib/init/rpc.o 00:03:18.167 CC lib/blob/blobstore.o 00:03:18.167 CC lib/blob/request.o 00:03:18.167 CC lib/blob/zeroes.o 00:03:18.167 CC lib/blob/blob_bs_dev.o 00:03:18.167 CC lib/accel/accel.o 
00:03:18.167 CC lib/accel/accel_rpc.o 00:03:18.167 CC lib/accel/accel_sw.o 00:03:18.167 CC lib/fsdev/fsdev.o 00:03:18.167 CC lib/fsdev/fsdev_io.o 00:03:18.167 CC lib/fsdev/fsdev_rpc.o 00:03:18.167 CC lib/virtio/virtio.o 00:03:18.167 CC lib/virtio/virtio_vhost_user.o 00:03:18.167 CC lib/virtio/virtio_vfio_user.o 00:03:18.167 CC lib/virtio/virtio_pci.o 00:03:18.167 LIB libspdk_init.a 00:03:18.167 SO libspdk_init.so.6.0 00:03:18.425 LIB libspdk_vfu_tgt.a 00:03:18.425 LIB libspdk_virtio.a 00:03:18.425 SO libspdk_virtio.so.7.0 00:03:18.425 SO libspdk_vfu_tgt.so.3.0 00:03:18.425 SYMLINK libspdk_init.so 00:03:18.425 SYMLINK libspdk_vfu_tgt.so 00:03:18.425 SYMLINK libspdk_virtio.so 00:03:18.683 LIB libspdk_fsdev.a 00:03:18.683 SO libspdk_fsdev.so.2.0 00:03:18.683 SYMLINK libspdk_fsdev.so 00:03:18.683 CC lib/event/app.o 00:03:18.683 CC lib/event/reactor.o 00:03:18.683 CC lib/event/log_rpc.o 00:03:18.683 CC lib/event/app_rpc.o 00:03:18.683 CC lib/event/scheduler_static.o 00:03:18.942 LIB libspdk_accel.a 00:03:18.942 SO libspdk_accel.so.16.0 00:03:18.942 LIB libspdk_nvme.a 00:03:18.942 SYMLINK libspdk_accel.so 00:03:18.942 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:18.942 SO libspdk_nvme.so.15.0 00:03:18.942 LIB libspdk_event.a 00:03:19.200 SO libspdk_event.so.14.0 00:03:19.200 SYMLINK libspdk_event.so 00:03:19.200 SYMLINK libspdk_nvme.so 00:03:19.200 CC lib/bdev/bdev.o 00:03:19.200 CC lib/bdev/bdev_rpc.o 00:03:19.200 CC lib/bdev/bdev_zone.o 00:03:19.200 CC lib/bdev/part.o 00:03:19.200 CC lib/bdev/scsi_nvme.o 00:03:19.458 LIB libspdk_fuse_dispatcher.a 00:03:19.458 SO libspdk_fuse_dispatcher.so.1.0 00:03:19.458 SYMLINK libspdk_fuse_dispatcher.so 00:03:20.392 LIB libspdk_blob.a 00:03:20.392 SO libspdk_blob.so.12.0 00:03:20.392 SYMLINK libspdk_blob.so 00:03:20.651 CC lib/lvol/lvol.o 00:03:20.651 CC lib/blobfs/blobfs.o 00:03:20.651 CC lib/blobfs/tree.o 00:03:21.219 LIB libspdk_bdev.a 00:03:21.219 SO libspdk_bdev.so.17.0 00:03:21.219 LIB libspdk_blobfs.a 00:03:21.219 SO libspdk_blobfs.so.11.0 00:03:21.219 SYMLINK libspdk_bdev.so 00:03:21.478 LIB libspdk_lvol.a 00:03:21.478 SYMLINK libspdk_blobfs.so 00:03:21.478 SO libspdk_lvol.so.11.0 00:03:21.478 SYMLINK libspdk_lvol.so 00:03:21.738 CC lib/ublk/ublk.o 00:03:21.738 CC lib/nvmf/ctrlr.o 00:03:21.738 CC lib/ublk/ublk_rpc.o 00:03:21.738 CC lib/nvmf/ctrlr_discovery.o 00:03:21.738 CC lib/nvmf/ctrlr_bdev.o 00:03:21.738 CC lib/nvmf/subsystem.o 00:03:21.738 CC lib/nvmf/nvmf.o 00:03:21.738 CC lib/nvmf/nvmf_rpc.o 00:03:21.738 CC lib/nbd/nbd.o 00:03:21.738 CC lib/nvmf/transport.o 00:03:21.738 CC lib/nvmf/tcp.o 00:03:21.738 CC lib/nbd/nbd_rpc.o 00:03:21.738 CC lib/nvmf/stubs.o 00:03:21.738 CC lib/scsi/dev.o 00:03:21.738 CC lib/nvmf/mdns_server.o 00:03:21.738 CC lib/ftl/ftl_core.o 00:03:21.738 CC lib/scsi/lun.o 00:03:21.738 CC lib/nvmf/vfio_user.o 00:03:21.738 CC lib/nvmf/rdma.o 00:03:21.738 CC lib/scsi/port.o 00:03:21.738 CC lib/ftl/ftl_init.o 00:03:21.738 CC lib/nvmf/auth.o 00:03:21.738 CC lib/scsi/scsi.o 00:03:21.738 CC lib/ftl/ftl_layout.o 00:03:21.738 CC lib/scsi/scsi_bdev.o 00:03:21.738 CC lib/ftl/ftl_debug.o 00:03:21.738 CC lib/scsi/scsi_pr.o 00:03:21.738 CC lib/ftl/ftl_io.o 00:03:21.738 CC lib/ftl/ftl_sb.o 00:03:21.738 CC lib/ftl/ftl_l2p.o 00:03:21.738 CC lib/scsi/scsi_rpc.o 00:03:21.738 CC lib/scsi/task.o 00:03:21.738 CC lib/ftl/ftl_l2p_flat.o 00:03:21.738 CC lib/ftl/ftl_nv_cache.o 00:03:21.738 CC lib/ftl/ftl_band.o 00:03:21.738 CC lib/ftl/ftl_band_ops.o 00:03:21.738 CC lib/ftl/ftl_writer.o 00:03:21.738 CC lib/ftl/ftl_rq.o 00:03:21.738 CC 
lib/ftl/ftl_reloc.o 00:03:21.738 CC lib/ftl/ftl_p2l.o 00:03:21.738 CC lib/ftl/ftl_p2l_log.o 00:03:21.738 CC lib/ftl/ftl_l2p_cache.o 00:03:21.738 CC lib/ftl/mngt/ftl_mngt.o 00:03:21.738 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:21.738 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:21.738 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:21.738 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:21.738 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:21.738 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:21.738 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:21.738 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:21.738 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:21.738 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:21.738 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:21.738 CC lib/ftl/utils/ftl_conf.o 00:03:21.738 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:21.738 CC lib/ftl/utils/ftl_md.o 00:03:21.738 CC lib/ftl/utils/ftl_mempool.o 00:03:21.738 CC lib/ftl/utils/ftl_property.o 00:03:21.738 CC lib/ftl/utils/ftl_bitmap.o 00:03:21.738 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:21.738 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:21.738 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:21.738 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:21.738 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:21.738 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:21.738 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:21.738 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:21.738 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:21.738 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:21.738 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:21.738 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:21.738 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:21.738 CC lib/ftl/base/ftl_base_bdev.o 00:03:21.738 CC lib/ftl/base/ftl_base_dev.o 00:03:21.738 CC lib/ftl/ftl_trace.o 00:03:22.307 LIB libspdk_nbd.a 00:03:22.307 SO libspdk_nbd.so.7.0 00:03:22.307 LIB libspdk_scsi.a 00:03:22.307 SO libspdk_scsi.so.9.0 00:03:22.307 SYMLINK libspdk_nbd.so 00:03:22.307 SYMLINK libspdk_scsi.so 00:03:22.566 LIB libspdk_ublk.a 00:03:22.566 SO libspdk_ublk.so.3.0 00:03:22.566 SYMLINK libspdk_ublk.so 00:03:22.840 CC lib/vhost/vhost_rpc.o 00:03:22.840 CC lib/vhost/vhost.o 00:03:22.840 CC lib/vhost/vhost_scsi.o 00:03:22.840 CC lib/vhost/vhost_blk.o 00:03:22.840 CC lib/vhost/rte_vhost_user.o 00:03:22.840 LIB libspdk_ftl.a 00:03:22.840 CC lib/iscsi/init_grp.o 00:03:22.840 CC lib/iscsi/conn.o 00:03:22.840 CC lib/iscsi/iscsi.o 00:03:22.840 CC lib/iscsi/param.o 00:03:22.840 CC lib/iscsi/portal_grp.o 00:03:22.840 CC lib/iscsi/tgt_node.o 00:03:22.840 CC lib/iscsi/iscsi_subsystem.o 00:03:22.840 CC lib/iscsi/iscsi_rpc.o 00:03:22.840 CC lib/iscsi/task.o 00:03:22.840 SO libspdk_ftl.so.9.0 00:03:23.098 SYMLINK libspdk_ftl.so 00:03:23.666 LIB libspdk_nvmf.a 00:03:23.666 LIB libspdk_vhost.a 00:03:23.666 SO libspdk_vhost.so.8.0 00:03:23.666 SO libspdk_nvmf.so.20.0 00:03:23.666 SYMLINK libspdk_vhost.so 00:03:23.666 SYMLINK libspdk_nvmf.so 00:03:23.666 LIB libspdk_iscsi.a 00:03:23.927 SO libspdk_iscsi.so.8.0 00:03:23.927 SYMLINK libspdk_iscsi.so 00:03:24.499 CC module/env_dpdk/env_dpdk_rpc.o 00:03:24.499 CC module/vfu_device/vfu_virtio.o 00:03:24.499 CC module/vfu_device/vfu_virtio_blk.o 00:03:24.499 CC module/vfu_device/vfu_virtio_scsi.o 00:03:24.499 CC module/vfu_device/vfu_virtio_rpc.o 00:03:24.499 CC module/vfu_device/vfu_virtio_fs.o 00:03:24.499 CC module/blob/bdev/blob_bdev.o 00:03:24.499 LIB libspdk_env_dpdk_rpc.a 00:03:24.758 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:24.758 CC module/accel/ioat/accel_ioat.o 00:03:24.758 CC module/accel/ioat/accel_ioat_rpc.o 00:03:24.758 CC module/fsdev/aio/fsdev_aio.o 
00:03:24.758 CC module/keyring/file/keyring.o 00:03:24.758 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:24.758 CC module/keyring/file/keyring_rpc.o 00:03:24.758 CC module/fsdev/aio/linux_aio_mgr.o 00:03:24.758 CC module/accel/dsa/accel_dsa.o 00:03:24.758 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:24.758 CC module/accel/dsa/accel_dsa_rpc.o 00:03:24.758 CC module/sock/posix/posix.o 00:03:24.758 CC module/accel/iaa/accel_iaa.o 00:03:24.758 CC module/accel/iaa/accel_iaa_rpc.o 00:03:24.758 CC module/keyring/linux/keyring.o 00:03:24.758 CC module/scheduler/gscheduler/gscheduler.o 00:03:24.758 CC module/keyring/linux/keyring_rpc.o 00:03:24.758 CC module/accel/error/accel_error.o 00:03:24.758 CC module/accel/error/accel_error_rpc.o 00:03:24.758 SO libspdk_env_dpdk_rpc.so.6.0 00:03:24.758 SYMLINK libspdk_env_dpdk_rpc.so 00:03:24.758 LIB libspdk_keyring_linux.a 00:03:24.758 LIB libspdk_keyring_file.a 00:03:24.758 LIB libspdk_scheduler_dpdk_governor.a 00:03:24.758 LIB libspdk_scheduler_gscheduler.a 00:03:24.758 SO libspdk_keyring_linux.so.1.0 00:03:24.758 LIB libspdk_accel_ioat.a 00:03:24.758 LIB libspdk_scheduler_dynamic.a 00:03:24.758 SO libspdk_keyring_file.so.2.0 00:03:24.758 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:24.758 LIB libspdk_accel_error.a 00:03:24.758 LIB libspdk_accel_iaa.a 00:03:24.758 SO libspdk_scheduler_gscheduler.so.4.0 00:03:24.758 SO libspdk_scheduler_dynamic.so.4.0 00:03:25.016 SO libspdk_accel_ioat.so.6.0 00:03:25.016 LIB libspdk_blob_bdev.a 00:03:25.016 SO libspdk_accel_iaa.so.3.0 00:03:25.016 SO libspdk_accel_error.so.2.0 00:03:25.016 SYMLINK libspdk_keyring_linux.so 00:03:25.016 SO libspdk_blob_bdev.so.12.0 00:03:25.016 SYMLINK libspdk_keyring_file.so 00:03:25.016 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:25.016 LIB libspdk_accel_dsa.a 00:03:25.016 SYMLINK libspdk_scheduler_dynamic.so 00:03:25.016 SYMLINK libspdk_scheduler_gscheduler.so 00:03:25.016 SYMLINK libspdk_accel_ioat.so 00:03:25.016 SYMLINK libspdk_accel_error.so 00:03:25.016 SO libspdk_accel_dsa.so.5.0 00:03:25.016 SYMLINK libspdk_blob_bdev.so 00:03:25.016 SYMLINK libspdk_accel_iaa.so 00:03:25.016 LIB libspdk_vfu_device.a 00:03:25.016 SYMLINK libspdk_accel_dsa.so 00:03:25.016 SO libspdk_vfu_device.so.3.0 00:03:25.016 SYMLINK libspdk_vfu_device.so 00:03:25.274 LIB libspdk_fsdev_aio.a 00:03:25.274 SO libspdk_fsdev_aio.so.1.0 00:03:25.274 LIB libspdk_sock_posix.a 00:03:25.274 SYMLINK libspdk_fsdev_aio.so 00:03:25.274 SO libspdk_sock_posix.so.6.0 00:03:25.533 SYMLINK libspdk_sock_posix.so 00:03:25.533 CC module/bdev/lvol/vbdev_lvol.o 00:03:25.533 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:25.533 CC module/bdev/ftl/bdev_ftl.o 00:03:25.533 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:25.533 CC module/bdev/nvme/bdev_nvme.o 00:03:25.533 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:25.533 CC module/bdev/nvme/nvme_rpc.o 00:03:25.533 CC module/bdev/delay/vbdev_delay.o 00:03:25.533 CC module/bdev/nvme/bdev_mdns_client.o 00:03:25.533 CC module/bdev/nvme/vbdev_opal.o 00:03:25.533 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:25.533 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:25.533 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:25.533 CC module/bdev/gpt/gpt.o 00:03:25.533 CC module/bdev/gpt/vbdev_gpt.o 00:03:25.533 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:25.533 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:25.533 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:25.533 CC module/bdev/error/vbdev_error.o 00:03:25.533 CC module/bdev/raid/bdev_raid.o 00:03:25.533 CC module/bdev/raid/bdev_raid_rpc.o 00:03:25.533 CC 
module/bdev/error/vbdev_error_rpc.o 00:03:25.533 CC module/bdev/raid/bdev_raid_sb.o 00:03:25.533 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:25.533 CC module/bdev/raid/raid0.o 00:03:25.533 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:25.533 CC module/bdev/raid/raid1.o 00:03:25.533 CC module/bdev/split/vbdev_split.o 00:03:25.533 CC module/bdev/raid/concat.o 00:03:25.533 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:25.533 CC module/bdev/malloc/bdev_malloc.o 00:03:25.533 CC module/bdev/split/vbdev_split_rpc.o 00:03:25.533 CC module/bdev/aio/bdev_aio.o 00:03:25.533 CC module/bdev/null/bdev_null.o 00:03:25.533 CC module/bdev/aio/bdev_aio_rpc.o 00:03:25.533 CC module/blobfs/bdev/blobfs_bdev.o 00:03:25.533 CC module/bdev/null/bdev_null_rpc.o 00:03:25.533 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:25.533 CC module/bdev/passthru/vbdev_passthru.o 00:03:25.533 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:25.533 CC module/bdev/iscsi/bdev_iscsi.o 00:03:25.533 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:25.792 LIB libspdk_blobfs_bdev.a 00:03:25.792 LIB libspdk_bdev_ftl.a 00:03:25.792 SO libspdk_blobfs_bdev.so.6.0 00:03:25.792 LIB libspdk_bdev_null.a 00:03:25.792 LIB libspdk_bdev_split.a 00:03:25.792 SO libspdk_bdev_ftl.so.6.0 00:03:25.792 LIB libspdk_bdev_error.a 00:03:25.792 SO libspdk_bdev_null.so.6.0 00:03:25.792 SO libspdk_bdev_split.so.6.0 00:03:25.792 LIB libspdk_bdev_gpt.a 00:03:25.792 SYMLINK libspdk_blobfs_bdev.so 00:03:25.792 LIB libspdk_bdev_zone_block.a 00:03:25.792 SO libspdk_bdev_error.so.6.0 00:03:25.792 LIB libspdk_bdev_delay.a 00:03:25.792 LIB libspdk_bdev_passthru.a 00:03:25.792 SYMLINK libspdk_bdev_ftl.so 00:03:25.792 SYMLINK libspdk_bdev_null.so 00:03:25.792 SO libspdk_bdev_gpt.so.6.0 00:03:25.792 LIB libspdk_bdev_aio.a 00:03:25.792 SO libspdk_bdev_zone_block.so.6.0 00:03:25.792 SO libspdk_bdev_delay.so.6.0 00:03:26.051 SO libspdk_bdev_passthru.so.6.0 00:03:26.051 SYMLINK libspdk_bdev_split.so 00:03:26.051 LIB libspdk_bdev_iscsi.a 00:03:26.051 LIB libspdk_bdev_malloc.a 00:03:26.051 SO libspdk_bdev_aio.so.6.0 00:03:26.051 SYMLINK libspdk_bdev_error.so 00:03:26.051 SO libspdk_bdev_iscsi.so.6.0 00:03:26.051 SYMLINK libspdk_bdev_gpt.so 00:03:26.051 SO libspdk_bdev_malloc.so.6.0 00:03:26.051 SYMLINK libspdk_bdev_zone_block.so 00:03:26.051 LIB libspdk_bdev_lvol.a 00:03:26.051 SYMLINK libspdk_bdev_delay.so 00:03:26.051 SYMLINK libspdk_bdev_passthru.so 00:03:26.051 SYMLINK libspdk_bdev_aio.so 00:03:26.051 LIB libspdk_bdev_virtio.a 00:03:26.051 SO libspdk_bdev_lvol.so.6.0 00:03:26.051 SYMLINK libspdk_bdev_iscsi.so 00:03:26.051 SYMLINK libspdk_bdev_malloc.so 00:03:26.051 SO libspdk_bdev_virtio.so.6.0 00:03:26.051 SYMLINK libspdk_bdev_lvol.so 00:03:26.051 SYMLINK libspdk_bdev_virtio.so 00:03:26.310 LIB libspdk_bdev_raid.a 00:03:26.310 SO libspdk_bdev_raid.so.6.0 00:03:26.569 SYMLINK libspdk_bdev_raid.so 00:03:27.513 LIB libspdk_bdev_nvme.a 00:03:27.513 SO libspdk_bdev_nvme.so.7.1 00:03:27.513 SYMLINK libspdk_bdev_nvme.so 00:03:28.082 CC module/event/subsystems/iobuf/iobuf.o 00:03:28.082 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:28.082 CC module/event/subsystems/sock/sock.o 00:03:28.082 CC module/event/subsystems/keyring/keyring.o 00:03:28.082 CC module/event/subsystems/vmd/vmd.o 00:03:28.340 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:28.340 CC module/event/subsystems/fsdev/fsdev.o 00:03:28.340 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:28.340 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:28.340 CC 
module/event/subsystems/scheduler/scheduler.o 00:03:28.340 LIB libspdk_event_keyring.a 00:03:28.340 LIB libspdk_event_vhost_blk.a 00:03:28.340 LIB libspdk_event_fsdev.a 00:03:28.340 LIB libspdk_event_scheduler.a 00:03:28.340 LIB libspdk_event_sock.a 00:03:28.340 SO libspdk_event_keyring.so.1.0 00:03:28.340 LIB libspdk_event_vmd.a 00:03:28.340 LIB libspdk_event_iobuf.a 00:03:28.340 LIB libspdk_event_vfu_tgt.a 00:03:28.340 SO libspdk_event_vhost_blk.so.3.0 00:03:28.340 SO libspdk_event_fsdev.so.1.0 00:03:28.340 SO libspdk_event_scheduler.so.4.0 00:03:28.340 SO libspdk_event_sock.so.5.0 00:03:28.340 SO libspdk_event_vmd.so.6.0 00:03:28.340 SO libspdk_event_iobuf.so.3.0 00:03:28.340 SO libspdk_event_vfu_tgt.so.3.0 00:03:28.340 SYMLINK libspdk_event_keyring.so 00:03:28.340 SYMLINK libspdk_event_vhost_blk.so 00:03:28.340 SYMLINK libspdk_event_scheduler.so 00:03:28.340 SYMLINK libspdk_event_fsdev.so 00:03:28.340 SYMLINK libspdk_event_vmd.so 00:03:28.340 SYMLINK libspdk_event_sock.so 00:03:28.340 SYMLINK libspdk_event_iobuf.so 00:03:28.599 SYMLINK libspdk_event_vfu_tgt.so 00:03:28.858 CC module/event/subsystems/accel/accel.o 00:03:28.858 LIB libspdk_event_accel.a 00:03:28.858 SO libspdk_event_accel.so.6.0 00:03:29.117 SYMLINK libspdk_event_accel.so 00:03:29.376 CC module/event/subsystems/bdev/bdev.o 00:03:29.635 LIB libspdk_event_bdev.a 00:03:29.635 SO libspdk_event_bdev.so.6.0 00:03:29.635 SYMLINK libspdk_event_bdev.so 00:03:29.894 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:29.894 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:29.894 CC module/event/subsystems/scsi/scsi.o 00:03:29.894 CC module/event/subsystems/ublk/ublk.o 00:03:29.894 CC module/event/subsystems/nbd/nbd.o 00:03:30.153 LIB libspdk_event_nbd.a 00:03:30.153 LIB libspdk_event_ublk.a 00:03:30.153 LIB libspdk_event_scsi.a 00:03:30.153 SO libspdk_event_nbd.so.6.0 00:03:30.153 SO libspdk_event_scsi.so.6.0 00:03:30.153 SO libspdk_event_ublk.so.3.0 00:03:30.153 LIB libspdk_event_nvmf.a 00:03:30.153 SYMLINK libspdk_event_nbd.so 00:03:30.153 SO libspdk_event_nvmf.so.6.0 00:03:30.153 SYMLINK libspdk_event_scsi.so 00:03:30.153 SYMLINK libspdk_event_ublk.so 00:03:30.153 SYMLINK libspdk_event_nvmf.so 00:03:30.721 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:30.721 CC module/event/subsystems/iscsi/iscsi.o 00:03:30.721 LIB libspdk_event_vhost_scsi.a 00:03:30.721 LIB libspdk_event_iscsi.a 00:03:30.721 SO libspdk_event_vhost_scsi.so.3.0 00:03:30.721 SO libspdk_event_iscsi.so.6.0 00:03:30.721 SYMLINK libspdk_event_vhost_scsi.so 00:03:30.721 SYMLINK libspdk_event_iscsi.so 00:03:30.980 SO libspdk.so.6.0 00:03:30.980 SYMLINK libspdk.so 00:03:31.239 CXX app/trace/trace.o 00:03:31.239 CC app/spdk_nvme_identify/identify.o 00:03:31.239 CC app/spdk_nvme_discover/discovery_aer.o 00:03:31.239 CC app/spdk_top/spdk_top.o 00:03:31.239 CC app/spdk_nvme_perf/perf.o 00:03:31.239 CC app/spdk_lspci/spdk_lspci.o 00:03:31.506 CC app/trace_record/trace_record.o 00:03:31.506 TEST_HEADER include/spdk/accel.h 00:03:31.506 TEST_HEADER include/spdk/accel_module.h 00:03:31.506 CC test/rpc_client/rpc_client_test.o 00:03:31.506 TEST_HEADER include/spdk/assert.h 00:03:31.506 TEST_HEADER include/spdk/barrier.h 00:03:31.506 TEST_HEADER include/spdk/base64.h 00:03:31.506 TEST_HEADER include/spdk/bdev.h 00:03:31.506 TEST_HEADER include/spdk/bdev_zone.h 00:03:31.506 TEST_HEADER include/spdk/bit_array.h 00:03:31.506 TEST_HEADER include/spdk/bdev_module.h 00:03:31.506 TEST_HEADER include/spdk/bit_pool.h 00:03:31.506 TEST_HEADER include/spdk/blob_bdev.h 00:03:31.506 
TEST_HEADER include/spdk/blobfs_bdev.h 00:03:31.506 TEST_HEADER include/spdk/blobfs.h 00:03:31.506 TEST_HEADER include/spdk/conf.h 00:03:31.506 TEST_HEADER include/spdk/blob.h 00:03:31.506 TEST_HEADER include/spdk/config.h 00:03:31.506 TEST_HEADER include/spdk/cpuset.h 00:03:31.506 TEST_HEADER include/spdk/crc16.h 00:03:31.506 TEST_HEADER include/spdk/crc64.h 00:03:31.506 TEST_HEADER include/spdk/dif.h 00:03:31.506 TEST_HEADER include/spdk/crc32.h 00:03:31.506 CC app/spdk_dd/spdk_dd.o 00:03:31.506 TEST_HEADER include/spdk/dma.h 00:03:31.506 CC app/nvmf_tgt/nvmf_main.o 00:03:31.506 TEST_HEADER include/spdk/endian.h 00:03:31.506 TEST_HEADER include/spdk/env.h 00:03:31.506 TEST_HEADER include/spdk/event.h 00:03:31.506 TEST_HEADER include/spdk/env_dpdk.h 00:03:31.506 TEST_HEADER include/spdk/fd_group.h 00:03:31.506 TEST_HEADER include/spdk/fd.h 00:03:31.506 TEST_HEADER include/spdk/file.h 00:03:31.506 TEST_HEADER include/spdk/fsdev.h 00:03:31.506 TEST_HEADER include/spdk/fsdev_module.h 00:03:31.506 TEST_HEADER include/spdk/gpt_spec.h 00:03:31.506 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:31.506 TEST_HEADER include/spdk/hexlify.h 00:03:31.506 TEST_HEADER include/spdk/ftl.h 00:03:31.506 TEST_HEADER include/spdk/histogram_data.h 00:03:31.506 TEST_HEADER include/spdk/idxd.h 00:03:31.506 TEST_HEADER include/spdk/idxd_spec.h 00:03:31.506 TEST_HEADER include/spdk/init.h 00:03:31.506 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:31.506 TEST_HEADER include/spdk/ioat.h 00:03:31.506 TEST_HEADER include/spdk/ioat_spec.h 00:03:31.506 TEST_HEADER include/spdk/iscsi_spec.h 00:03:31.506 TEST_HEADER include/spdk/json.h 00:03:31.506 TEST_HEADER include/spdk/keyring.h 00:03:31.506 TEST_HEADER include/spdk/jsonrpc.h 00:03:31.506 TEST_HEADER include/spdk/keyring_module.h 00:03:31.506 TEST_HEADER include/spdk/log.h 00:03:31.506 TEST_HEADER include/spdk/md5.h 00:03:31.506 TEST_HEADER include/spdk/likely.h 00:03:31.506 TEST_HEADER include/spdk/memory.h 00:03:31.506 TEST_HEADER include/spdk/lvol.h 00:03:31.506 TEST_HEADER include/spdk/nbd.h 00:03:31.506 TEST_HEADER include/spdk/mmio.h 00:03:31.506 TEST_HEADER include/spdk/net.h 00:03:31.506 TEST_HEADER include/spdk/nvme.h 00:03:31.506 TEST_HEADER include/spdk/notify.h 00:03:31.506 TEST_HEADER include/spdk/nvme_intel.h 00:03:31.506 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:31.506 CC app/iscsi_tgt/iscsi_tgt.o 00:03:31.506 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:31.506 TEST_HEADER include/spdk/nvme_zns.h 00:03:31.506 TEST_HEADER include/spdk/nvme_spec.h 00:03:31.506 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:31.506 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:31.506 TEST_HEADER include/spdk/nvmf.h 00:03:31.506 TEST_HEADER include/spdk/nvmf_spec.h 00:03:31.506 TEST_HEADER include/spdk/nvmf_transport.h 00:03:31.506 TEST_HEADER include/spdk/opal_spec.h 00:03:31.506 TEST_HEADER include/spdk/pci_ids.h 00:03:31.506 TEST_HEADER include/spdk/opal.h 00:03:31.506 TEST_HEADER include/spdk/pipe.h 00:03:31.506 TEST_HEADER include/spdk/queue.h 00:03:31.506 TEST_HEADER include/spdk/rpc.h 00:03:31.506 TEST_HEADER include/spdk/reduce.h 00:03:31.506 TEST_HEADER include/spdk/scheduler.h 00:03:31.506 TEST_HEADER include/spdk/scsi.h 00:03:31.506 TEST_HEADER include/spdk/scsi_spec.h 00:03:31.506 TEST_HEADER include/spdk/sock.h 00:03:31.506 TEST_HEADER include/spdk/stdinc.h 00:03:31.506 TEST_HEADER include/spdk/thread.h 00:03:31.506 TEST_HEADER include/spdk/string.h 00:03:31.506 TEST_HEADER include/spdk/trace.h 00:03:31.506 TEST_HEADER include/spdk/trace_parser.h 
00:03:31.506 TEST_HEADER include/spdk/ublk.h 00:03:31.506 TEST_HEADER include/spdk/tree.h 00:03:31.506 TEST_HEADER include/spdk/util.h 00:03:31.506 TEST_HEADER include/spdk/uuid.h 00:03:31.506 CC app/spdk_tgt/spdk_tgt.o 00:03:31.506 TEST_HEADER include/spdk/version.h 00:03:31.506 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:31.506 TEST_HEADER include/spdk/vmd.h 00:03:31.506 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:31.506 TEST_HEADER include/spdk/vhost.h 00:03:31.506 TEST_HEADER include/spdk/zipf.h 00:03:31.506 TEST_HEADER include/spdk/xor.h 00:03:31.506 CXX test/cpp_headers/accel.o 00:03:31.506 CXX test/cpp_headers/accel_module.o 00:03:31.506 CXX test/cpp_headers/assert.o 00:03:31.506 CXX test/cpp_headers/barrier.o 00:03:31.506 CXX test/cpp_headers/bdev.o 00:03:31.506 CXX test/cpp_headers/base64.o 00:03:31.506 CXX test/cpp_headers/bit_array.o 00:03:31.506 CXX test/cpp_headers/bdev_zone.o 00:03:31.506 CXX test/cpp_headers/bdev_module.o 00:03:31.506 CXX test/cpp_headers/bit_pool.o 00:03:31.506 CXX test/cpp_headers/blob_bdev.o 00:03:31.506 CXX test/cpp_headers/blobfs_bdev.o 00:03:31.506 CXX test/cpp_headers/blob.o 00:03:31.506 CXX test/cpp_headers/blobfs.o 00:03:31.506 CXX test/cpp_headers/conf.o 00:03:31.506 CXX test/cpp_headers/cpuset.o 00:03:31.506 CXX test/cpp_headers/config.o 00:03:31.506 CXX test/cpp_headers/crc16.o 00:03:31.506 CXX test/cpp_headers/dif.o 00:03:31.506 CXX test/cpp_headers/dma.o 00:03:31.506 CXX test/cpp_headers/crc32.o 00:03:31.506 CXX test/cpp_headers/crc64.o 00:03:31.506 CXX test/cpp_headers/endian.o 00:03:31.506 CXX test/cpp_headers/env.o 00:03:31.506 CXX test/cpp_headers/fd_group.o 00:03:31.506 CXX test/cpp_headers/event.o 00:03:31.506 CXX test/cpp_headers/fd.o 00:03:31.506 CXX test/cpp_headers/env_dpdk.o 00:03:31.506 CXX test/cpp_headers/file.o 00:03:31.506 CXX test/cpp_headers/fsdev.o 00:03:31.506 CXX test/cpp_headers/fsdev_module.o 00:03:31.506 CXX test/cpp_headers/fuse_dispatcher.o 00:03:31.506 CXX test/cpp_headers/ftl.o 00:03:31.506 CXX test/cpp_headers/gpt_spec.o 00:03:31.506 CXX test/cpp_headers/hexlify.o 00:03:31.506 CXX test/cpp_headers/histogram_data.o 00:03:31.506 CXX test/cpp_headers/idxd_spec.o 00:03:31.506 CXX test/cpp_headers/idxd.o 00:03:31.506 CXX test/cpp_headers/init.o 00:03:31.506 CXX test/cpp_headers/ioat.o 00:03:31.506 CXX test/cpp_headers/ioat_spec.o 00:03:31.506 CXX test/cpp_headers/iscsi_spec.o 00:03:31.506 CXX test/cpp_headers/json.o 00:03:31.506 CXX test/cpp_headers/jsonrpc.o 00:03:31.506 CXX test/cpp_headers/keyring_module.o 00:03:31.506 CXX test/cpp_headers/likely.o 00:03:31.506 CXX test/cpp_headers/keyring.o 00:03:31.506 CXX test/cpp_headers/log.o 00:03:31.506 CXX test/cpp_headers/lvol.o 00:03:31.506 CXX test/cpp_headers/memory.o 00:03:31.506 CXX test/cpp_headers/mmio.o 00:03:31.506 CXX test/cpp_headers/md5.o 00:03:31.506 CXX test/cpp_headers/nbd.o 00:03:31.506 CXX test/cpp_headers/net.o 00:03:31.506 CXX test/cpp_headers/notify.o 00:03:31.506 CXX test/cpp_headers/nvme.o 00:03:31.506 CXX test/cpp_headers/nvme_intel.o 00:03:31.506 CXX test/cpp_headers/nvme_ocssd.o 00:03:31.506 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:31.506 CXX test/cpp_headers/nvme_spec.o 00:03:31.506 CXX test/cpp_headers/nvme_zns.o 00:03:31.506 CXX test/cpp_headers/nvmf_cmd.o 00:03:31.506 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:31.506 CXX test/cpp_headers/nvmf.o 00:03:31.506 CC examples/util/zipf/zipf.o 00:03:31.506 CXX test/cpp_headers/nvmf_spec.o 00:03:31.506 CXX test/cpp_headers/nvmf_transport.o 00:03:31.506 CXX test/cpp_headers/opal.o 00:03:31.506 CC 
test/app/jsoncat/jsoncat.o 00:03:31.506 CXX test/cpp_headers/opal_spec.o 00:03:31.506 CC examples/ioat/perf/perf.o 00:03:31.506 CC test/app/histogram_perf/histogram_perf.o 00:03:31.778 CC examples/ioat/verify/verify.o 00:03:31.778 CC test/app/stub/stub.o 00:03:31.778 CC test/env/pci/pci_ut.o 00:03:31.778 CC test/env/memory/memory_ut.o 00:03:31.778 CC test/thread/poller_perf/poller_perf.o 00:03:31.778 CC test/env/vtophys/vtophys.o 00:03:31.778 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:31.778 CC app/fio/nvme/fio_plugin.o 00:03:31.778 CC test/app/bdev_svc/bdev_svc.o 00:03:31.778 CC test/dma/test_dma/test_dma.o 00:03:31.778 CC app/fio/bdev/fio_plugin.o 00:03:31.778 LINK spdk_lspci 00:03:32.040 LINK interrupt_tgt 00:03:32.040 LINK rpc_client_test 00:03:32.040 LINK spdk_nvme_discover 00:03:32.040 LINK nvmf_tgt 00:03:32.040 LINK spdk_trace_record 00:03:32.040 CC test/env/mem_callbacks/mem_callbacks.o 00:03:32.040 LINK histogram_perf 00:03:32.303 LINK poller_perf 00:03:32.303 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:32.303 LINK iscsi_tgt 00:03:32.303 CXX test/cpp_headers/pci_ids.o 00:03:32.303 CXX test/cpp_headers/pipe.o 00:03:32.303 CXX test/cpp_headers/queue.o 00:03:32.303 CXX test/cpp_headers/reduce.o 00:03:32.303 LINK zipf 00:03:32.303 CXX test/cpp_headers/rpc.o 00:03:32.303 CXX test/cpp_headers/scheduler.o 00:03:32.303 CXX test/cpp_headers/scsi.o 00:03:32.303 LINK jsoncat 00:03:32.303 CXX test/cpp_headers/scsi_spec.o 00:03:32.303 CXX test/cpp_headers/sock.o 00:03:32.303 CXX test/cpp_headers/string.o 00:03:32.303 CXX test/cpp_headers/stdinc.o 00:03:32.303 CXX test/cpp_headers/thread.o 00:03:32.303 CXX test/cpp_headers/trace.o 00:03:32.303 CXX test/cpp_headers/trace_parser.o 00:03:32.303 CXX test/cpp_headers/tree.o 00:03:32.303 CXX test/cpp_headers/ublk.o 00:03:32.303 CXX test/cpp_headers/uuid.o 00:03:32.303 CXX test/cpp_headers/util.o 00:03:32.303 CXX test/cpp_headers/version.o 00:03:32.303 CXX test/cpp_headers/vfio_user_pci.o 00:03:32.303 CXX test/cpp_headers/vfio_user_spec.o 00:03:32.303 CXX test/cpp_headers/vhost.o 00:03:32.303 CXX test/cpp_headers/vmd.o 00:03:32.303 CXX test/cpp_headers/xor.o 00:03:32.303 LINK vtophys 00:03:32.303 CXX test/cpp_headers/zipf.o 00:03:32.303 LINK spdk_dd 00:03:32.303 LINK spdk_tgt 00:03:32.303 LINK ioat_perf 00:03:32.303 LINK env_dpdk_post_init 00:03:32.303 LINK stub 00:03:32.303 LINK bdev_svc 00:03:32.303 LINK verify 00:03:32.303 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:32.303 LINK spdk_trace 00:03:32.303 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:32.303 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:32.561 LINK spdk_nvme 00:03:32.561 LINK spdk_bdev 00:03:32.561 LINK pci_ut 00:03:32.561 LINK test_dma 00:03:32.819 LINK nvme_fuzz 00:03:32.819 CC test/event/reactor_perf/reactor_perf.o 00:03:32.819 CC test/event/reactor/reactor.o 00:03:32.819 CC test/event/event_perf/event_perf.o 00:03:32.820 CC test/event/app_repeat/app_repeat.o 00:03:32.820 CC test/event/scheduler/scheduler.o 00:03:32.820 CC examples/idxd/perf/perf.o 00:03:32.820 CC examples/vmd/lsvmd/lsvmd.o 00:03:32.820 CC examples/sock/hello_world/hello_sock.o 00:03:32.820 LINK spdk_nvme_perf 00:03:32.820 CC examples/vmd/led/led.o 00:03:32.820 CC examples/thread/thread/thread_ex.o 00:03:32.820 LINK spdk_nvme_identify 00:03:32.820 LINK spdk_top 00:03:32.820 CC app/vhost/vhost.o 00:03:32.820 LINK vhost_fuzz 00:03:32.820 LINK event_perf 00:03:32.820 LINK reactor_perf 00:03:32.820 LINK reactor 00:03:32.820 LINK lsvmd 00:03:32.820 LINK mem_callbacks 00:03:32.820 LINK app_repeat 
00:03:33.078 LINK led 00:03:33.078 LINK scheduler 00:03:33.078 LINK hello_sock 00:03:33.078 LINK vhost 00:03:33.078 LINK idxd_perf 00:03:33.078 LINK thread 00:03:33.078 CC test/nvme/e2edp/nvme_dp.o 00:03:33.078 LINK memory_ut 00:03:33.078 CC test/nvme/err_injection/err_injection.o 00:03:33.078 CC test/nvme/boot_partition/boot_partition.o 00:03:33.078 CC test/nvme/startup/startup.o 00:03:33.078 CC test/nvme/cuse/cuse.o 00:03:33.078 CC test/nvme/compliance/nvme_compliance.o 00:03:33.078 CC test/nvme/aer/aer.o 00:03:33.078 CC test/nvme/reset/reset.o 00:03:33.078 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:33.078 CC test/nvme/simple_copy/simple_copy.o 00:03:33.078 CC test/nvme/fused_ordering/fused_ordering.o 00:03:33.078 CC test/nvme/overhead/overhead.o 00:03:33.078 CC test/nvme/reserve/reserve.o 00:03:33.078 CC test/nvme/fdp/fdp.o 00:03:33.078 CC test/nvme/connect_stress/connect_stress.o 00:03:33.078 CC test/nvme/sgl/sgl.o 00:03:33.078 CC test/accel/dif/dif.o 00:03:33.078 CC test/blobfs/mkfs/mkfs.o 00:03:33.336 CC test/lvol/esnap/esnap.o 00:03:33.336 LINK boot_partition 00:03:33.336 LINK startup 00:03:33.336 LINK err_injection 00:03:33.336 LINK connect_stress 00:03:33.336 LINK doorbell_aers 00:03:33.336 LINK reserve 00:03:33.336 LINK fused_ordering 00:03:33.336 LINK simple_copy 00:03:33.336 LINK nvme_dp 00:03:33.336 LINK sgl 00:03:33.336 LINK reset 00:03:33.336 LINK aer 00:03:33.336 LINK mkfs 00:03:33.336 LINK overhead 00:03:33.595 LINK fdp 00:03:33.595 LINK nvme_compliance 00:03:33.595 CC examples/nvme/hello_world/hello_world.o 00:03:33.595 CC examples/nvme/hotplug/hotplug.o 00:03:33.595 CC examples/nvme/abort/abort.o 00:03:33.595 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:33.595 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:33.595 CC examples/nvme/reconnect/reconnect.o 00:03:33.595 CC examples/nvme/arbitration/arbitration.o 00:03:33.595 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:33.595 CC examples/accel/perf/accel_perf.o 00:03:33.595 CC examples/blob/hello_world/hello_blob.o 00:03:33.595 CC examples/blob/cli/blobcli.o 00:03:33.595 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:33.595 LINK cmb_copy 00:03:33.595 LINK pmr_persistence 00:03:33.595 LINK hello_world 00:03:33.854 LINK hotplug 00:03:33.854 LINK dif 00:03:33.854 LINK arbitration 00:03:33.854 LINK reconnect 00:03:33.854 LINK abort 00:03:33.854 LINK iscsi_fuzz 00:03:33.854 LINK hello_blob 00:03:33.854 LINK hello_fsdev 00:03:33.854 LINK nvme_manage 00:03:34.113 LINK accel_perf 00:03:34.113 LINK blobcli 00:03:34.113 LINK cuse 00:03:34.371 CC test/bdev/bdevio/bdevio.o 00:03:34.629 CC examples/bdev/hello_world/hello_bdev.o 00:03:34.629 CC examples/bdev/bdevperf/bdevperf.o 00:03:34.629 LINK bdevio 00:03:34.629 LINK hello_bdev 00:03:35.197 LINK bdevperf 00:03:35.764 CC examples/nvmf/nvmf/nvmf.o 00:03:36.023 LINK nvmf 00:03:36.960 LINK esnap 00:03:37.219 00:03:37.219 real 0m56.296s 00:03:37.219 user 8m22.522s 00:03:37.219 sys 3m54.972s 00:03:37.219 14:05:37 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:37.219 14:05:37 make -- common/autotest_common.sh@10 -- $ set +x 00:03:37.219 ************************************ 00:03:37.219 END TEST make 00:03:37.219 ************************************ 00:03:37.219 14:05:37 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:37.219 14:05:37 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:37.219 14:05:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:37.219 14:05:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:03:37.219 14:05:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:37.219 14:05:37 -- pm/common@44 -- $ pid=1354936 00:03:37.219 14:05:37 -- pm/common@50 -- $ kill -TERM 1354936 00:03:37.219 14:05:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.219 14:05:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:37.219 14:05:37 -- pm/common@44 -- $ pid=1354937 00:03:37.219 14:05:37 -- pm/common@50 -- $ kill -TERM 1354937 00:03:37.219 14:05:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.219 14:05:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:37.219 14:05:37 -- pm/common@44 -- $ pid=1354939 00:03:37.219 14:05:37 -- pm/common@50 -- $ kill -TERM 1354939 00:03:37.219 14:05:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.219 14:05:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:37.219 14:05:37 -- pm/common@44 -- $ pid=1354962 00:03:37.219 14:05:37 -- pm/common@50 -- $ sudo -E kill -TERM 1354962 00:03:37.219 14:05:37 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:37.219 14:05:37 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:37.219 14:05:37 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:37.219 14:05:37 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:37.219 14:05:37 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:37.479 14:05:37 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:37.479 14:05:37 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:37.479 14:05:37 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:37.479 14:05:37 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:37.479 14:05:37 -- scripts/common.sh@336 -- # IFS=.-: 00:03:37.479 14:05:37 -- scripts/common.sh@336 -- # read -ra ver1 00:03:37.479 14:05:37 -- scripts/common.sh@337 -- # IFS=.-: 00:03:37.479 14:05:37 -- scripts/common.sh@337 -- # read -ra ver2 00:03:37.479 14:05:37 -- scripts/common.sh@338 -- # local 'op=<' 00:03:37.479 14:05:37 -- scripts/common.sh@340 -- # ver1_l=2 00:03:37.479 14:05:37 -- scripts/common.sh@341 -- # ver2_l=1 00:03:37.479 14:05:37 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:37.479 14:05:37 -- scripts/common.sh@344 -- # case "$op" in 00:03:37.479 14:05:37 -- scripts/common.sh@345 -- # : 1 00:03:37.479 14:05:37 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:37.479 14:05:37 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:37.479 14:05:37 -- scripts/common.sh@365 -- # decimal 1 00:03:37.479 14:05:37 -- scripts/common.sh@353 -- # local d=1 00:03:37.479 14:05:37 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:37.479 14:05:37 -- scripts/common.sh@355 -- # echo 1 00:03:37.479 14:05:37 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:37.479 14:05:37 -- scripts/common.sh@366 -- # decimal 2 00:03:37.479 14:05:37 -- scripts/common.sh@353 -- # local d=2 00:03:37.479 14:05:37 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:37.479 14:05:37 -- scripts/common.sh@355 -- # echo 2 00:03:37.479 14:05:37 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:37.479 14:05:37 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:37.479 14:05:37 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:37.479 14:05:37 -- scripts/common.sh@368 -- # return 0 00:03:37.479 14:05:37 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:37.479 14:05:37 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:37.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.479 --rc genhtml_branch_coverage=1 00:03:37.479 --rc genhtml_function_coverage=1 00:03:37.479 --rc genhtml_legend=1 00:03:37.479 --rc geninfo_all_blocks=1 00:03:37.479 --rc geninfo_unexecuted_blocks=1 00:03:37.479 00:03:37.479 ' 00:03:37.479 14:05:37 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:37.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.479 --rc genhtml_branch_coverage=1 00:03:37.479 --rc genhtml_function_coverage=1 00:03:37.479 --rc genhtml_legend=1 00:03:37.479 --rc geninfo_all_blocks=1 00:03:37.479 --rc geninfo_unexecuted_blocks=1 00:03:37.479 00:03:37.479 ' 00:03:37.479 14:05:37 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:37.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.479 --rc genhtml_branch_coverage=1 00:03:37.479 --rc genhtml_function_coverage=1 00:03:37.479 --rc genhtml_legend=1 00:03:37.479 --rc geninfo_all_blocks=1 00:03:37.479 --rc geninfo_unexecuted_blocks=1 00:03:37.479 00:03:37.479 ' 00:03:37.479 14:05:37 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:37.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.479 --rc genhtml_branch_coverage=1 00:03:37.479 --rc genhtml_function_coverage=1 00:03:37.479 --rc genhtml_legend=1 00:03:37.479 --rc geninfo_all_blocks=1 00:03:37.479 --rc geninfo_unexecuted_blocks=1 00:03:37.479 00:03:37.479 ' 00:03:37.479 14:05:37 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:37.479 14:05:37 -- nvmf/common.sh@7 -- # uname -s 00:03:37.479 14:05:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:37.479 14:05:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:37.479 14:05:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:37.479 14:05:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:37.479 14:05:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:37.479 14:05:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:37.479 14:05:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:37.479 14:05:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:37.479 14:05:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:37.479 14:05:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:37.479 14:05:38 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:03:37.479 14:05:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:03:37.479 14:05:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:37.479 14:05:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:37.479 14:05:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:37.479 14:05:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:37.479 14:05:38 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:37.479 14:05:38 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:37.479 14:05:38 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:37.479 14:05:38 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:37.479 14:05:38 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:37.479 14:05:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.479 14:05:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.479 14:05:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.479 14:05:38 -- paths/export.sh@5 -- # export PATH 00:03:37.479 14:05:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.479 14:05:38 -- nvmf/common.sh@51 -- # : 0 00:03:37.479 14:05:38 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:37.479 14:05:38 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:37.479 14:05:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:37.479 14:05:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:37.479 14:05:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:37.479 14:05:38 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:37.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:37.479 14:05:38 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:37.479 14:05:38 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:37.479 14:05:38 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:37.479 14:05:38 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:37.479 14:05:38 -- spdk/autotest.sh@32 -- # uname -s 00:03:37.479 14:05:38 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:37.479 14:05:38 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:37.479 14:05:38 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
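The '[' '' -eq 1 ']' test traced above trips a genuine bash diagnostic, "[: : integer expression expected": the flag being tested expands to an empty string, which test(1) cannot compare numerically. The comparison still evaluates false, so the run continues, but the noise is avoidable with a default expansion. A minimal sketch of the usual guard (FLAG is a hypothetical stand-in, not the variable nvmf/common.sh line 33 actually expands):

    # FLAG is a placeholder for whichever test flag was left unset.
    FLAG=
    [ "$FLAG" -eq 1 ]        # noisy: "[: : integer expression expected", non-zero exit
    if [ "${FLAG:-0}" -eq 1 ]; then   # quiet: empty/unset falls back to 0
        echo "feature enabled"
    fi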
00:03:37.479 14:05:38 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:37.479 14:05:38 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:37.479 14:05:38 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:37.479 14:05:38 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:37.479 14:05:38 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:37.479 14:05:38 -- spdk/autotest.sh@48 -- # udevadm_pid=1418464 00:03:37.479 14:05:38 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:37.479 14:05:38 -- pm/common@17 -- # local monitor 00:03:37.479 14:05:38 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:37.479 14:05:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.479 14:05:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.479 14:05:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.479 14:05:38 -- pm/common@21 -- # date +%s 00:03:37.479 14:05:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.479 14:05:38 -- pm/common@21 -- # date +%s 00:03:37.479 14:05:38 -- pm/common@25 -- # sleep 1 00:03:37.479 14:05:38 -- pm/common@21 -- # date +%s 00:03:37.480 14:05:38 -- pm/common@21 -- # date +%s 00:03:37.480 14:05:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733835938 00:03:37.480 14:05:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733835938 00:03:37.480 14:05:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733835938 00:03:37.480 14:05:38 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733835938 00:03:37.480 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733835938_collect-cpu-load.pm.log 00:03:37.480 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733835938_collect-vmstat.pm.log 00:03:37.480 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733835938_collect-cpu-temp.pm.log 00:03:37.480 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733835938_collect-bmc-pm.bmc.pm.log 00:03:38.417 14:05:39 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:38.417 14:05:39 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:38.417 14:05:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:38.417 14:05:39 -- common/autotest_common.sh@10 -- # set +x 00:03:38.417 14:05:39 -- spdk/autotest.sh@59 -- # create_test_list 00:03:38.417 14:05:39 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:38.417 14:05:39 -- common/autotest_common.sh@10 -- # set +x 00:03:38.417 14:05:39 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:38.417 14:05:39 
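All four collector launches above share one `date +%s` stamp (1733835938), which is why the four "Redirecting to ... .pm.log" lines differ only in their suffix; each collector also drops a pid file that the `kill -TERM` teardown loop seen at the start of this log later reads. A rough launch-side sketch of that convention, with a local output directory and a dummy loop standing in for the real collect-* scripts:

    # Sketch of the pm/common launch pattern: one shared epoch stamp,
    # per-monitor log and pid file. Paths and the loop are illustrative only.
    power_dir=./output/power
    mkdir -p "$power_dir"
    stamp=$(date +%s)    # one stamp shared by every collector in this run

    for monitor in collect-cpu-load collect-vmstat collect-cpu-temp; do
        log="$power_dir/monitor.autotest.sh.${stamp}_${monitor}.pm.log"
        ( while :; do uptime; sleep 1; done ) >"$log" 2>&1 &   # stand-in collector
        echo $! > "$power_dir/$monitor.pid"   # teardown: kill -TERM "$(<pidfile)"
    done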
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:38.417 14:05:39 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:38.417 14:05:39 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:38.417 14:05:39 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:38.417 14:05:39 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:38.417 14:05:39 -- common/autotest_common.sh@1457 -- # uname 00:03:38.417 14:05:39 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:38.417 14:05:39 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:38.417 14:05:39 -- common/autotest_common.sh@1477 -- # uname 00:03:38.417 14:05:39 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:38.417 14:05:39 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:38.417 14:05:39 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:38.676 lcov: LCOV version 1.15 00:03:38.676 14:05:39 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:56.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:56.778 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:03.525 14:06:03 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:03.525 14:06:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.525 14:06:03 -- common/autotest_common.sh@10 -- # set +x 00:04:03.525 14:06:03 -- spdk/autotest.sh@78 -- # rm -f 00:04:03.525 14:06:03 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:06.815 0000:5f:00.0 (1b96 2600): Already using the nvme driver 00:04:06.815 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:06.815 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:06.815 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:06.815 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:06.815 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:06.815 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:06.815 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:06.815 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:06.815 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:06.815 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:06.815 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:06.815 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:06.815 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:06.815 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:06.815 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:06.815 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:06.816 0000:80:04.0 (8086 
2021): Already using the ioatdma driver 00:04:07.074 14:06:07 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:07.074 14:06:07 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:07.074 14:06:07 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:07.074 14:06:07 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:07.074 14:06:07 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:07.074 14:06:07 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:07.074 14:06:07 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:07.074 14:06:07 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:04:07.074 14:06:07 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:07.074 14:06:07 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:07.074 14:06:07 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:07.075 14:06:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:07.075 14:06:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:07.075 14:06:07 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:07.075 14:06:07 -- common/autotest_common.sh@1669 -- # bdf=0000:5f:00.0 00:04:07.075 14:06:07 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:07.075 14:06:07 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:07.075 14:06:07 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:07.075 14:06:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:07.075 14:06:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:07.075 14:06:07 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:07.075 14:06:07 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:04:07.075 14:06:07 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:07.075 14:06:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:07.075 14:06:07 -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:04:07.075 14:06:07 -- common/autotest_common.sh@1672 -- # zoned_ctrls["$nvme"]=0000:5f:00.0 00:04:07.075 14:06:07 -- common/autotest_common.sh@1673 -- # continue 2 00:04:07.075 14:06:07 -- common/autotest_common.sh@1678 -- # for nvme in "${!zoned_ctrls[@]}" 00:04:07.075 14:06:07 -- common/autotest_common.sh@1679 -- # for ns in "$nvme/"nvme*n* 00:04:07.075 14:06:07 -- common/autotest_common.sh@1680 -- # zoned_devs["${ns##*/}"]=0000:5f:00.0 00:04:07.075 14:06:07 -- common/autotest_common.sh@1679 -- # for ns in "$nvme/"nvme*n* 00:04:07.075 14:06:07 -- common/autotest_common.sh@1680 -- # zoned_devs["${ns##*/}"]=0000:5f:00.0 00:04:07.075 14:06:07 -- spdk/autotest.sh@85 -- # (( 2 > 0 )) 00:04:07.075 14:06:07 -- spdk/autotest.sh@90 -- # export 'PCI_BLOCKED=0000:5f:00.0 0000:5f:00.0' 00:04:07.075 14:06:07 -- spdk/autotest.sh@90 -- # PCI_BLOCKED='0000:5f:00.0 0000:5f:00.0' 00:04:07.075 14:06:07 -- spdk/autotest.sh@91 -- # export 'PCI_ZONED=0000:5f:00.0 0000:5f:00.0' 00:04:07.075 14:06:07 -- spdk/autotest.sh@91 -- # PCI_ZONED='0000:5f:00.0 0000:5f:00.0' 00:04:07.075 14:06:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:07.075 14:06:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:07.075 14:06:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:07.075 14:06:07 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:07.075 14:06:07 -- 
scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:07.075 No valid GPT data, bailing 00:04:07.075 14:06:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:07.075 14:06:07 -- scripts/common.sh@394 -- # pt= 00:04:07.075 14:06:07 -- scripts/common.sh@395 -- # return 1 00:04:07.075 14:06:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:07.075 1+0 records in 00:04:07.075 1+0 records out 00:04:07.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00389535 s, 269 MB/s 00:04:07.075 14:06:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:07.075 14:06:07 -- spdk/autotest.sh@99 -- # [[ -z 0000:5f:00.0 ]] 00:04:07.075 14:06:07 -- spdk/autotest.sh@99 -- # continue 00:04:07.075 14:06:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:07.075 14:06:07 -- spdk/autotest.sh@99 -- # [[ -z 0000:5f:00.0 ]] 00:04:07.075 14:06:07 -- spdk/autotest.sh@99 -- # continue 00:04:07.075 14:06:07 -- spdk/autotest.sh@105 -- # sync 00:04:07.075 14:06:07 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:07.075 14:06:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:07.075 14:06:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:12.350 14:06:13 -- spdk/autotest.sh@111 -- # uname -s 00:04:12.350 14:06:13 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:12.350 14:06:13 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:12.350 14:06:13 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:15.647 Hugepages 00:04:15.647 node hugesize free / total 00:04:15.647 node0 1048576kB 0 / 0 00:04:15.647 node0 2048kB 0 / 0 00:04:15.647 node1 1048576kB 0 / 0 00:04:15.906 node1 2048kB 0 / 0 00:04:15.906 00:04:15.906 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:15.906 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:15.906 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:15.906 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:15.906 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:15.906 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:15.906 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:15.906 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:15.906 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:15.906 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:15.906 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2 00:04:15.906 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:15.906 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:15.906 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:15.906 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:15.906 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:15.906 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:15.906 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:15.906 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:15.906 14:06:16 -- spdk/autotest.sh@117 -- # uname -s 00:04:16.166 14:06:16 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:16.166 14:06:16 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:16.166 14:06:16 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:19.456 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:19.456 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:19.456 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:19.456 0000:00:04.5 (8086 2021): ioatdma -> 
vfio-pci 00:04:19.456 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:19.456 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:19.456 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:19.456 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:19.456 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:19.456 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:19.456 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:19.456 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:19.456 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:19.456 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:19.456 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:19.456 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:19.456 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:20.390 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:20.390 14:06:21 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:21.327 14:06:22 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:21.327 14:06:22 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:21.327 14:06:22 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:21.327 14:06:22 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:21.327 14:06:22 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:21.327 14:06:22 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:21.327 14:06:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:21.327 14:06:22 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:21.327 14:06:22 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:21.585 14:06:22 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:21.586 14:06:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:21.586 14:06:22 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:24.876 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:24.876 Waiting for block devices as requested 00:04:24.876 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:24.876 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:25.135 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:25.135 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:25.135 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:25.135 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:25.394 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:25.394 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:25.394 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:25.653 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:25.653 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:25.653 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:25.912 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:25.912 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:25.912 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:25.912 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:26.171 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:26.171 14:06:26 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:26.171 14:06:26 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:26.171 14:06:26 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:26.171 14:06:26 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:26.171 14:06:26 -- 
common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:26.171 14:06:26 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:26.171 14:06:26 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:26.171 14:06:26 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:26.171 14:06:26 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:26.171 14:06:26 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:26.171 14:06:26 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:26.171 14:06:26 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:26.171 14:06:26 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:26.171 14:06:26 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:04:26.171 14:06:26 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:26.171 14:06:26 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:26.171 14:06:26 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:26.171 14:06:26 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:26.171 14:06:26 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:26.171 14:06:26 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:26.171 14:06:26 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:26.171 14:06:26 -- common/autotest_common.sh@1543 -- # continue 00:04:26.171 14:06:26 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:26.171 14:06:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:26.171 14:06:26 -- common/autotest_common.sh@10 -- # set +x 00:04:26.171 14:06:26 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:26.171 14:06:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.171 14:06:26 -- common/autotest_common.sh@10 -- # set +x 00:04:26.171 14:06:26 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:29.461 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:29.461 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:29.461 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:29.461 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:29.720 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:29.720 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:29.720 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:29.720 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:29.720 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:29.720 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:29.720 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:29.720 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:29.720 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:29.720 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:29.720 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:29.720 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:29.720 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:30.658 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:30.658 14:06:31 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:30.658 14:06:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:30.658 14:06:31 -- common/autotest_common.sh@10 -- # set +x 00:04:30.658 14:06:31 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:30.658 14:06:31 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:30.658 14:06:31 -- 
common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:30.658 14:06:31 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:30.658 14:06:31 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:30.658 14:06:31 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:30.658 14:06:31 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:30.658 14:06:31 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:30.658 14:06:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:30.658 14:06:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:30.658 14:06:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:30.658 14:06:31 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:30.658 14:06:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:30.658 14:06:31 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:30.658 14:06:31 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:30.658 14:06:31 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:30.658 14:06:31 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:30.658 14:06:31 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:30.658 14:06:31 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:30.658 14:06:31 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:30.658 14:06:31 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:30.658 14:06:31 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:30.658 14:06:31 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:30.658 14:06:31 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1434355 00:04:30.658 14:06:31 -- common/autotest_common.sh@1585 -- # waitforlisten 1434355 00:04:30.658 14:06:31 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:30.658 14:06:31 -- common/autotest_common.sh@835 -- # '[' -z 1434355 ']' 00:04:30.658 14:06:31 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.658 14:06:31 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.658 14:06:31 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.658 14:06:31 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.658 14:06:31 -- common/autotest_common.sh@10 -- # set +x 00:04:30.918 [2024-12-10 14:06:31.440740] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
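The `waitforlisten 1434355` call above holds the script until the freshly forked spdk_tgt is actually answering on /var/tmp/spdk.sock (the trace shows max_retries=100); only once it returns are the bdev_nvme_attach_controller and bdev_nvme_opal_revert RPCs below issued. A simplified sketch of such a wait, using rpc_get_methods as the liveness probe — the probe choice and rpc.py path are assumptions, not a transcript of the real helper:

    # Sketch: wait until an spdk_tgt with the given pid answers RPCs on its
    # UNIX socket. rpc.py location and the probe method are assumptions.
    pid=$1
    sock=${2:-/var/tmp/spdk.sock}
    rpc=./scripts/rpc.py

    for _ in $(seq 1 100); do                      # max_retries=100, as in the trace
        kill -0 "$pid" 2>/dev/null || { echo "spdk_tgt $pid died" >&2; exit 1; }
        if [ -S "$sock" ] && "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            exit 0                                 # target is up and listening
        fi
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    exit 1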
00:04:30.918 [2024-12-10 14:06:31.440790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1434355 ] 00:04:30.918 [2024-12-10 14:06:31.520839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.918 [2024-12-10 14:06:31.560079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.853 14:06:32 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.853 14:06:32 -- common/autotest_common.sh@868 -- # return 0 00:04:31.853 14:06:32 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:31.853 14:06:32 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:31.853 14:06:32 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:35.141 nvme0n1 00:04:35.141 14:06:35 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:35.141 [2024-12-10 14:06:35.461309] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:35.141 [2024-12-10 14:06:35.461338] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:35.141 request: 00:04:35.141 { 00:04:35.141 "nvme_ctrlr_name": "nvme0", 00:04:35.141 "password": "test", 00:04:35.141 "method": "bdev_nvme_opal_revert", 00:04:35.141 "req_id": 1 00:04:35.141 } 00:04:35.141 Got JSON-RPC error response 00:04:35.141 response: 00:04:35.141 { 00:04:35.141 "code": -32603, 00:04:35.141 "message": "Internal error" 00:04:35.141 } 00:04:35.141 14:06:35 -- common/autotest_common.sh@1591 -- # true 00:04:35.141 14:06:35 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:35.141 14:06:35 -- common/autotest_common.sh@1595 -- # killprocess 1434355 00:04:35.141 14:06:35 -- common/autotest_common.sh@954 -- # '[' -z 1434355 ']' 00:04:35.141 14:06:35 -- common/autotest_common.sh@958 -- # kill -0 1434355 00:04:35.141 14:06:35 -- common/autotest_common.sh@959 -- # uname 00:04:35.141 14:06:35 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.141 14:06:35 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1434355 00:04:35.141 14:06:35 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.141 14:06:35 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.141 14:06:35 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1434355' 00:04:35.141 killing process with pid 1434355 00:04:35.141 14:06:35 -- common/autotest_common.sh@973 -- # kill 1434355 00:04:35.141 14:06:35 -- common/autotest_common.sh@978 -- # wait 1434355 00:04:36.517 14:06:37 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:36.517 14:06:37 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:36.517 14:06:37 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:36.517 14:06:37 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:36.517 14:06:37 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:36.517 14:06:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:36.517 14:06:37 -- common/autotest_common.sh@10 -- # set +x 00:04:36.517 14:06:37 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:36.517 14:06:37 -- spdk/autotest.sh@155 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:36.517 14:06:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.517 14:06:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.517 14:06:37 -- common/autotest_common.sh@10 -- # set +x 00:04:36.517 ************************************ 00:04:36.517 START TEST env 00:04:36.517 ************************************ 00:04:36.517 14:06:37 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:36.776 * Looking for test storage... 00:04:36.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:36.776 14:06:37 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:36.776 14:06:37 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:36.776 14:06:37 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:36.776 14:06:37 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:36.776 14:06:37 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.776 14:06:37 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.776 14:06:37 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.776 14:06:37 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.776 14:06:37 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.776 14:06:37 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.776 14:06:37 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.776 14:06:37 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.776 14:06:37 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.776 14:06:37 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.776 14:06:37 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.776 14:06:37 env -- scripts/common.sh@344 -- # case "$op" in 00:04:36.776 14:06:37 env -- scripts/common.sh@345 -- # : 1 00:04:36.776 14:06:37 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.776 14:06:37 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.776 14:06:37 env -- scripts/common.sh@365 -- # decimal 1 00:04:36.776 14:06:37 env -- scripts/common.sh@353 -- # local d=1 00:04:36.776 14:06:37 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.776 14:06:37 env -- scripts/common.sh@355 -- # echo 1 00:04:36.776 14:06:37 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.776 14:06:37 env -- scripts/common.sh@366 -- # decimal 2 00:04:36.776 14:06:37 env -- scripts/common.sh@353 -- # local d=2 00:04:36.776 14:06:37 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.776 14:06:37 env -- scripts/common.sh@355 -- # echo 2 00:04:36.776 14:06:37 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.776 14:06:37 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.776 14:06:37 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.776 14:06:37 env -- scripts/common.sh@368 -- # return 0 00:04:36.776 14:06:37 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.777 14:06:37 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:36.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.777 --rc genhtml_branch_coverage=1 00:04:36.777 --rc genhtml_function_coverage=1 00:04:36.777 --rc genhtml_legend=1 00:04:36.777 --rc geninfo_all_blocks=1 00:04:36.777 --rc geninfo_unexecuted_blocks=1 00:04:36.777 00:04:36.777 ' 00:04:36.777 14:06:37 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:36.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.777 --rc genhtml_branch_coverage=1 00:04:36.777 --rc genhtml_function_coverage=1 00:04:36.777 --rc genhtml_legend=1 00:04:36.777 --rc geninfo_all_blocks=1 00:04:36.777 --rc geninfo_unexecuted_blocks=1 00:04:36.777 00:04:36.777 ' 00:04:36.777 14:06:37 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:36.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.777 --rc genhtml_branch_coverage=1 00:04:36.777 --rc genhtml_function_coverage=1 00:04:36.777 --rc genhtml_legend=1 00:04:36.777 --rc geninfo_all_blocks=1 00:04:36.777 --rc geninfo_unexecuted_blocks=1 00:04:36.777 00:04:36.777 ' 00:04:36.777 14:06:37 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:36.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.777 --rc genhtml_branch_coverage=1 00:04:36.777 --rc genhtml_function_coverage=1 00:04:36.777 --rc genhtml_legend=1 00:04:36.777 --rc geninfo_all_blocks=1 00:04:36.777 --rc geninfo_unexecuted_blocks=1 00:04:36.777 00:04:36.777 ' 00:04:36.777 14:06:37 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:36.777 14:06:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.777 14:06:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.777 14:06:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.777 ************************************ 00:04:36.777 START TEST env_memory 00:04:36.777 ************************************ 00:04:36.777 14:06:37 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:36.777 00:04:36.777 00:04:36.777 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.777 http://cunit.sourceforge.net/ 00:04:36.777 00:04:36.777 00:04:36.777 Suite: memory 00:04:36.777 Test: alloc and free memory map ...[2024-12-10 14:06:37.468456] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:36.777 passed 00:04:36.777 Test: mem map translation ...[2024-12-10 14:06:37.487058] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:36.777 [2024-12-10 14:06:37.487073] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:36.777 [2024-12-10 14:06:37.487108] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:36.777 [2024-12-10 14:06:37.487114] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:37.036 passed 00:04:37.036 Test: mem map registration ...[2024-12-10 14:06:37.524989] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:37.036 [2024-12-10 14:06:37.525003] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:37.036 passed 00:04:37.036 Test: mem map adjacent registrations ...passed 00:04:37.036 00:04:37.036 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.036 suites 1 1 n/a 0 0 00:04:37.036 tests 4 4 4 0 0 00:04:37.036 asserts 152 152 152 0 n/a 00:04:37.036 00:04:37.036 Elapsed time = 0.134 seconds 00:04:37.036 00:04:37.036 real 0m0.147s 00:04:37.036 user 0m0.138s 00:04:37.036 sys 0m0.008s 00:04:37.036 14:06:37 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.036 14:06:37 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:37.036 ************************************ 00:04:37.036 END TEST env_memory 00:04:37.036 ************************************ 00:04:37.036 14:06:37 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:37.036 14:06:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.036 14:06:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.036 14:06:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.036 ************************************ 00:04:37.036 START TEST env_vtophys 00:04:37.036 ************************************ 00:04:37.036 14:06:37 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:37.036 EAL: lib.eal log level changed from notice to debug 00:04:37.036 EAL: Detected lcore 0 as core 0 on socket 0 00:04:37.037 EAL: Detected lcore 1 as core 1 on socket 0 00:04:37.037 EAL: Detected lcore 2 as core 2 on socket 0 00:04:37.037 EAL: Detected lcore 3 as core 3 on socket 0 00:04:37.037 EAL: Detected lcore 4 as core 4 on socket 0 00:04:37.037 EAL: Detected lcore 5 as core 5 on socket 0 00:04:37.037 EAL: Detected lcore 6 as core 6 on socket 0 00:04:37.037 EAL: Detected lcore 7 as core 8 on socket 0 00:04:37.037 EAL: Detected lcore 8 as core 9 on socket 0 00:04:37.037 EAL: Detected lcore 9 as core 10 on socket 0 00:04:37.037 EAL: Detected lcore 10 as 
core 11 on socket 0 00:04:37.037 EAL: Detected lcore 11 as core 12 on socket 0 00:04:37.037 EAL: Detected lcore 12 as core 13 on socket 0 00:04:37.037 EAL: Detected lcore 13 as core 16 on socket 0 00:04:37.037 EAL: Detected lcore 14 as core 17 on socket 0 00:04:37.037 EAL: Detected lcore 15 as core 18 on socket 0 00:04:37.037 EAL: Detected lcore 16 as core 19 on socket 0 00:04:37.037 EAL: Detected lcore 17 as core 20 on socket 0 00:04:37.037 EAL: Detected lcore 18 as core 21 on socket 0 00:04:37.037 EAL: Detected lcore 19 as core 25 on socket 0 00:04:37.037 EAL: Detected lcore 20 as core 26 on socket 0 00:04:37.037 EAL: Detected lcore 21 as core 27 on socket 0 00:04:37.037 EAL: Detected lcore 22 as core 28 on socket 0 00:04:37.037 EAL: Detected lcore 23 as core 29 on socket 0 00:04:37.037 EAL: Detected lcore 24 as core 0 on socket 1 00:04:37.037 EAL: Detected lcore 25 as core 1 on socket 1 00:04:37.037 EAL: Detected lcore 26 as core 2 on socket 1 00:04:37.037 EAL: Detected lcore 27 as core 3 on socket 1 00:04:37.037 EAL: Detected lcore 28 as core 4 on socket 1 00:04:37.037 EAL: Detected lcore 29 as core 5 on socket 1 00:04:37.037 EAL: Detected lcore 30 as core 6 on socket 1 00:04:37.037 EAL: Detected lcore 31 as core 8 on socket 1 00:04:37.037 EAL: Detected lcore 32 as core 9 on socket 1 00:04:37.037 EAL: Detected lcore 33 as core 10 on socket 1 00:04:37.037 EAL: Detected lcore 34 as core 11 on socket 1 00:04:37.037 EAL: Detected lcore 35 as core 12 on socket 1 00:04:37.037 EAL: Detected lcore 36 as core 13 on socket 1 00:04:37.037 EAL: Detected lcore 37 as core 16 on socket 1 00:04:37.037 EAL: Detected lcore 38 as core 17 on socket 1 00:04:37.037 EAL: Detected lcore 39 as core 18 on socket 1 00:04:37.037 EAL: Detected lcore 40 as core 19 on socket 1 00:04:37.037 EAL: Detected lcore 41 as core 20 on socket 1 00:04:37.037 EAL: Detected lcore 42 as core 21 on socket 1 00:04:37.037 EAL: Detected lcore 43 as core 25 on socket 1 00:04:37.037 EAL: Detected lcore 44 as core 26 on socket 1 00:04:37.037 EAL: Detected lcore 45 as core 27 on socket 1 00:04:37.037 EAL: Detected lcore 46 as core 28 on socket 1 00:04:37.037 EAL: Detected lcore 47 as core 29 on socket 1 00:04:37.037 EAL: Detected lcore 48 as core 0 on socket 0 00:04:37.037 EAL: Detected lcore 49 as core 1 on socket 0 00:04:37.037 EAL: Detected lcore 50 as core 2 on socket 0 00:04:37.037 EAL: Detected lcore 51 as core 3 on socket 0 00:04:37.037 EAL: Detected lcore 52 as core 4 on socket 0 00:04:37.037 EAL: Detected lcore 53 as core 5 on socket 0 00:04:37.037 EAL: Detected lcore 54 as core 6 on socket 0 00:04:37.037 EAL: Detected lcore 55 as core 8 on socket 0 00:04:37.037 EAL: Detected lcore 56 as core 9 on socket 0 00:04:37.037 EAL: Detected lcore 57 as core 10 on socket 0 00:04:37.037 EAL: Detected lcore 58 as core 11 on socket 0 00:04:37.037 EAL: Detected lcore 59 as core 12 on socket 0 00:04:37.037 EAL: Detected lcore 60 as core 13 on socket 0 00:04:37.037 EAL: Detected lcore 61 as core 16 on socket 0 00:04:37.037 EAL: Detected lcore 62 as core 17 on socket 0 00:04:37.037 EAL: Detected lcore 63 as core 18 on socket 0 00:04:37.037 EAL: Detected lcore 64 as core 19 on socket 0 00:04:37.037 EAL: Detected lcore 65 as core 20 on socket 0 00:04:37.037 EAL: Detected lcore 66 as core 21 on socket 0 00:04:37.037 EAL: Detected lcore 67 as core 25 on socket 0 00:04:37.037 EAL: Detected lcore 68 as core 26 on socket 0 00:04:37.037 EAL: Detected lcore 69 as core 27 on socket 0 00:04:37.037 EAL: Detected lcore 70 as core 28 on socket 0 00:04:37.037 
EAL: Detected lcore 71 as core 29 on socket 0 00:04:37.037 EAL: Detected lcore 72 as core 0 on socket 1 00:04:37.037 EAL: Detected lcore 73 as core 1 on socket 1 00:04:37.037 EAL: Detected lcore 74 as core 2 on socket 1 00:04:37.037 EAL: Detected lcore 75 as core 3 on socket 1 00:04:37.037 EAL: Detected lcore 76 as core 4 on socket 1 00:04:37.037 EAL: Detected lcore 77 as core 5 on socket 1 00:04:37.037 EAL: Detected lcore 78 as core 6 on socket 1 00:04:37.037 EAL: Detected lcore 79 as core 8 on socket 1 00:04:37.037 EAL: Detected lcore 80 as core 9 on socket 1 00:04:37.037 EAL: Detected lcore 81 as core 10 on socket 1 00:04:37.037 EAL: Detected lcore 82 as core 11 on socket 1 00:04:37.037 EAL: Detected lcore 83 as core 12 on socket 1 00:04:37.037 EAL: Detected lcore 84 as core 13 on socket 1 00:04:37.037 EAL: Detected lcore 85 as core 16 on socket 1 00:04:37.037 EAL: Detected lcore 86 as core 17 on socket 1 00:04:37.037 EAL: Detected lcore 87 as core 18 on socket 1 00:04:37.037 EAL: Detected lcore 88 as core 19 on socket 1 00:04:37.037 EAL: Detected lcore 89 as core 20 on socket 1 00:04:37.037 EAL: Detected lcore 90 as core 21 on socket 1 00:04:37.037 EAL: Detected lcore 91 as core 25 on socket 1 00:04:37.037 EAL: Detected lcore 92 as core 26 on socket 1 00:04:37.037 EAL: Detected lcore 93 as core 27 on socket 1 00:04:37.037 EAL: Detected lcore 94 as core 28 on socket 1 00:04:37.037 EAL: Detected lcore 95 as core 29 on socket 1 00:04:37.037 EAL: Maximum logical cores by configuration: 128 00:04:37.037 EAL: Detected CPU lcores: 96 00:04:37.037 EAL: Detected NUMA nodes: 2 00:04:37.037 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:37.037 EAL: Detected shared linkage of DPDK 00:04:37.037 EAL: No shared files mode enabled, IPC will be disabled 00:04:37.037 EAL: Bus pci wants IOVA as 'DC' 00:04:37.037 EAL: Buses did not request a specific IOVA mode. 00:04:37.037 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:37.037 EAL: Selected IOVA mode 'VA' 00:04:37.037 EAL: Probing VFIO support... 00:04:37.037 EAL: IOMMU type 1 (Type 1) is supported 00:04:37.037 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:37.037 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:37.037 EAL: VFIO support initialized 00:04:37.037 EAL: Ask a virtual area of 0x2e000 bytes 00:04:37.037 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:37.037 EAL: Setting up physically contiguous memory... 
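The lcore walk above ends at 96 detected lcores on 2 NUMA nodes: two 24-core sockets, each core hyper-threaded, with lcores 48-95 repeating the core/socket pairs of lcores 0-47 as sibling threads. The EAL derives that table from the kernel's CPU topology files, so the same "lcore N as core M on socket S" mapping can be reproduced from the shell (a sketch, not EAL code):

    # Sketch: emit the same lcore -> core/socket table the EAL logs,
    # read straight from sysfs CPU topology.
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        lcore=${cpu##*cpu}
        core=$(< "$cpu/topology/core_id")
        socket=$(< "$cpu/topology/physical_package_id")
        echo "lcore $lcore as core $core on socket $socket"
    done | sort -n -k2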
00:04:37.037 EAL: Setting maximum number of open files to 524288 00:04:37.037 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:37.037 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:37.037 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:37.037 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.037 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:37.037 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:37.037 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.037 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:37.037 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:37.037 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.037 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:37.037 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:37.037 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.037 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:37.037 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:37.037 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.037 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:37.037 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:37.037 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.037 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:37.037 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:37.037 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.037 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:37.037 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:37.037 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.037 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:37.037 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:37.037 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:37.037 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.037 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:37.037 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:37.037 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.037 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:37.037 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:37.037 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.037 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:37.037 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:37.037 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.037 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:37.037 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:37.037 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.037 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:37.037 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:37.037 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.037 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:37.037 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:37.037 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.037 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:37.037 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:37.037 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.037 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:37.037 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:37.037 EAL: Hugepages will be freed exactly as allocated. 00:04:37.037 EAL: No shared files mode enabled, IPC is disabled 00:04:37.037 EAL: No shared files mode enabled, IPC is disabled 00:04:37.037 EAL: TSC frequency is ~2100000 KHz 00:04:37.037 EAL: Main lcore 0 is ready (tid=7f7343003a00;cpuset=[0]) 00:04:37.037 EAL: Trying to obtain current memory policy. 00:04:37.037 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.037 EAL: Restoring previous memory policy: 0 00:04:37.037 EAL: request: mp_malloc_sync 00:04:37.037 EAL: No shared files mode enabled, IPC is disabled 00:04:37.037 EAL: Heap on socket 0 was expanded by 2MB 00:04:37.037 EAL: No shared files mode enabled, IPC is disabled 00:04:37.037 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:37.037 EAL: Mem event callback 'spdk:(nil)' registered 00:04:37.037 00:04:37.038 00:04:37.038 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.038 http://cunit.sourceforge.net/ 00:04:37.038 00:04:37.038 00:04:37.038 Suite: components_suite 00:04:37.038 Test: vtophys_malloc_test ...passed 00:04:37.038 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:37.038 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.038 EAL: Restoring previous memory policy: 4 00:04:37.038 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.038 EAL: request: mp_malloc_sync 00:04:37.038 EAL: No shared files mode enabled, IPC is disabled 00:04:37.038 EAL: Heap on socket 0 was expanded by 4MB 00:04:37.038 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.038 EAL: request: mp_malloc_sync 00:04:37.038 EAL: No shared files mode enabled, IPC is disabled 00:04:37.038 EAL: Heap on socket 0 was shrunk by 4MB 00:04:37.038 EAL: Trying to obtain current memory policy. 00:04:37.038 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.038 EAL: Restoring previous memory policy: 4 00:04:37.038 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.038 EAL: request: mp_malloc_sync 00:04:37.038 EAL: No shared files mode enabled, IPC is disabled 00:04:37.038 EAL: Heap on socket 0 was expanded by 6MB 00:04:37.038 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.038 EAL: request: mp_malloc_sync 00:04:37.038 EAL: No shared files mode enabled, IPC is disabled 00:04:37.038 EAL: Heap on socket 0 was shrunk by 6MB 00:04:37.038 EAL: Trying to obtain current memory policy. 00:04:37.038 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.038 EAL: Restoring previous memory policy: 4 00:04:37.038 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.038 EAL: request: mp_malloc_sync 00:04:37.038 EAL: No shared files mode enabled, IPC is disabled 00:04:37.038 EAL: Heap on socket 0 was expanded by 10MB 00:04:37.038 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.038 EAL: request: mp_malloc_sync 00:04:37.038 EAL: No shared files mode enabled, IPC is disabled 00:04:37.038 EAL: Heap on socket 0 was shrunk by 10MB 00:04:37.038 EAL: Trying to obtain current memory policy. 
00:04:37.038 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.038 EAL: Restoring previous memory policy: 4 00:04:37.038 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.038 EAL: request: mp_malloc_sync 00:04:37.038 EAL: No shared files mode enabled, IPC is disabled 00:04:37.038 EAL: Heap on socket 0 was expanded by 18MB 00:04:37.038 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.038 EAL: request: mp_malloc_sync 00:04:37.038 EAL: No shared files mode enabled, IPC is disabled 00:04:37.038 EAL: Heap on socket 0 was shrunk by 18MB 00:04:37.038 EAL: Trying to obtain current memory policy. 00:04:37.038 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.038 EAL: Restoring previous memory policy: 4 00:04:37.038 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.038 EAL: request: mp_malloc_sync 00:04:37.038 EAL: No shared files mode enabled, IPC is disabled 00:04:37.038 EAL: Heap on socket 0 was expanded by 34MB 00:04:37.038 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.038 EAL: request: mp_malloc_sync 00:04:37.038 EAL: No shared files mode enabled, IPC is disabled 00:04:37.038 EAL: Heap on socket 0 was shrunk by 34MB 00:04:37.038 EAL: Trying to obtain current memory policy. 00:04:37.038 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.038 EAL: Restoring previous memory policy: 4 00:04:37.038 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.038 EAL: request: mp_malloc_sync 00:04:37.038 EAL: No shared files mode enabled, IPC is disabled 00:04:37.038 EAL: Heap on socket 0 was expanded by 66MB 00:04:37.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.297 EAL: request: mp_malloc_sync 00:04:37.297 EAL: No shared files mode enabled, IPC is disabled 00:04:37.297 EAL: Heap on socket 0 was shrunk by 66MB 00:04:37.297 EAL: Trying to obtain current memory policy. 00:04:37.297 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.297 EAL: Restoring previous memory policy: 4 00:04:37.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.297 EAL: request: mp_malloc_sync 00:04:37.297 EAL: No shared files mode enabled, IPC is disabled 00:04:37.297 EAL: Heap on socket 0 was expanded by 130MB 00:04:37.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.297 EAL: request: mp_malloc_sync 00:04:37.297 EAL: No shared files mode enabled, IPC is disabled 00:04:37.297 EAL: Heap on socket 0 was shrunk by 130MB 00:04:37.297 EAL: Trying to obtain current memory policy. 00:04:37.297 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.297 EAL: Restoring previous memory policy: 4 00:04:37.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.297 EAL: request: mp_malloc_sync 00:04:37.297 EAL: No shared files mode enabled, IPC is disabled 00:04:37.297 EAL: Heap on socket 0 was expanded by 258MB 00:04:37.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.297 EAL: request: mp_malloc_sync 00:04:37.297 EAL: No shared files mode enabled, IPC is disabled 00:04:37.297 EAL: Heap on socket 0 was shrunk by 258MB 00:04:37.297 EAL: Trying to obtain current memory policy. 
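Each vtophys_spdk_malloc_test round above repeats one cycle: save the NUMA policy, set MPOL_PREFERRED for socket 0, allocate (the 'spdk:(nil)' mem event callback fires and the heap expands), free (the heap shrinks by the same amount), restore the policy. The expansion sizes form a tidy series — 4, 6, 10, 18, 34, 66, 130, 258 MB here, continuing to 514 and 1026 MB below — i.e. 2^k + 2 MB for k = 1..10; reading the constant 2 MB as allocator overhead on top of a doubling buffer is an inference from the log, not something the log states. The arithmetic is easy to check:

    # Sketch: the heap expansions logged above follow (1 << k) + 2 MB.
    for k in $(seq 1 10); do
        printf '%dMB ' $(( (1 << k) + 2 ))
    done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB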
00:04:37.297 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:37.556 EAL: Restoring previous memory policy: 4
00:04:37.556 EAL: Calling mem event callback 'spdk:(nil)'
00:04:37.556 EAL: request: mp_malloc_sync
00:04:37.556 EAL: No shared files mode enabled, IPC is disabled
00:04:37.556 EAL: Heap on socket 0 was expanded by 514MB
00:04:37.556 EAL: Calling mem event callback 'spdk:(nil)'
00:04:37.556 EAL: request: mp_malloc_sync
00:04:37.556 EAL: No shared files mode enabled, IPC is disabled
00:04:37.556 EAL: Heap on socket 0 was shrunk by 514MB
00:04:37.556 EAL: Trying to obtain current memory policy.
00:04:37.556 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:37.815 EAL: Restoring previous memory policy: 4
00:04:37.815 EAL: Calling mem event callback 'spdk:(nil)'
00:04:37.815 EAL: request: mp_malloc_sync
00:04:37.815 EAL: No shared files mode enabled, IPC is disabled
00:04:37.815 EAL: Heap on socket 0 was expanded by 1026MB
00:04:38.074 EAL: Calling mem event callback 'spdk:(nil)'
00:04:38.074 EAL: request: mp_malloc_sync
00:04:38.074 EAL: No shared files mode enabled, IPC is disabled
00:04:38.074 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:38.074 passed
00:04:38.074
00:04:38.074 Run Summary: Type Total Ran Passed Failed Inactive
00:04:38.074 suites 1 1 n/a 0 0
00:04:38.074 tests 2 2 2 0 0
00:04:38.074 asserts 497 497 497 0 n/a
00:04:38.074
00:04:38.074 Elapsed time = 0.972 seconds
00:04:38.074 EAL: Calling mem event callback 'spdk:(nil)'
00:04:38.074 EAL: request: mp_malloc_sync
00:04:38.074 EAL: No shared files mode enabled, IPC is disabled
00:04:38.074 EAL: Heap on socket 0 was shrunk by 2MB
00:04:38.074 EAL: No shared files mode enabled, IPC is disabled
00:04:38.074 EAL: No shared files mode enabled, IPC is disabled
00:04:38.074 EAL: No shared files mode enabled, IPC is disabled
00:04:38.074
00:04:38.074 real 0m1.108s
00:04:38.074 user 0m0.658s
00:04:38.074 sys 0m0.425s
00:04:38.074 14:06:38 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:38.074 14:06:38 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:38.074 ************************************
00:04:38.074 END TEST env_vtophys
00:04:38.074 ************************************
00:04:38.074 14:06:38 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:38.074 14:06:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:38.074 14:06:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:38.074 14:06:38 env -- common/autotest_common.sh@10 -- # set +x
00:04:38.333 ************************************
00:04:38.333 START TEST env_pci
00:04:38.333 ************************************
00:04:38.333 14:06:38 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:38.333
00:04:38.333
00:04:38.333 CUnit - A unit testing framework for C - Version 2.1-3
00:04:38.333 http://cunit.sourceforge.net/
00:04:38.333
00:04:38.333
00:04:38.333 Suite: pci
00:04:38.333 Test: pci_hook ...[2024-12-10 14:06:38.834769] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1435658 has claimed it
00:04:38.333 EAL: Cannot find device (10000:00:01.0)
00:04:38.333 EAL: Failed to attach device on primary process
00:04:38.333 passed
00:04:38.333
00:04:38.333 Run Summary: Type Total Ran Passed Failed Inactive
00:04:38.333 suites 1 1 n/a 0 0
00:04:38.333 tests 1 1 1 0 0
00:04:38.333 asserts 25 25 25 0 n/a
00:04:38.333
00:04:38.333 Elapsed time = 0.029 seconds
00:04:38.333
00:04:38.333 real 0m0.049s
00:04:38.333 user 0m0.017s
00:04:38.333 sys 0m0.031s
00:04:38.333 14:06:38 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:38.333 14:06:38 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:38.333 ************************************
00:04:38.333 END TEST env_pci
00:04:38.333 ************************************
00:04:38.333 14:06:38 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:38.333 14:06:38 env -- env/env.sh@15 -- # uname
00:04:38.333 14:06:38 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:38.333 14:06:38 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:38.333 14:06:38 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:38.333 14:06:38 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:38.333 14:06:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:38.333 14:06:38 env -- common/autotest_common.sh@10 -- # set +x
00:04:38.333 ************************************
00:04:38.333 START TEST env_dpdk_post_init
00:04:38.333 ************************************
00:04:38.333 14:06:38 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:38.333 EAL: Detected CPU lcores: 96
00:04:38.333 EAL: Detected NUMA nodes: 2
00:04:38.333 EAL: Detected shared linkage of DPDK
00:04:38.333 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:38.333 EAL: Selected IOVA mode 'VA'
00:04:38.333 EAL: VFIO support initialized
00:04:38.333 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:38.592 EAL: Using IOMMU type 1 (Type 1)
00:04:38.592 EAL: Ignore mapping IO port bar(1)
00:04:38.592 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:04:38.592 EAL: Ignore mapping IO port bar(1)
00:04:38.592 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:04:38.592 EAL: Ignore mapping IO port bar(1)
00:04:38.592 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:04:38.592 EAL: Ignore mapping IO port bar(1)
00:04:38.592 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:04:38.592 EAL: Ignore mapping IO port bar(1)
00:04:38.592 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:04:38.592 EAL: Ignore mapping IO port bar(1)
00:04:38.592 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:04:38.592 EAL: Ignore mapping IO port bar(1)
00:04:38.592 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:04:38.592 EAL: Ignore mapping IO port bar(1)
00:04:38.592 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:04:39.532 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:04:39.532 EAL: Ignore mapping IO port bar(1)
00:04:39.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:04:39.532 EAL: Ignore mapping IO port bar(1)
00:04:39.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:04:39.532 EAL: Ignore mapping IO port bar(1)
00:04:39.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:04:39.532 EAL: Ignore mapping IO port bar(1)
00:04:39.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:04:39.532 EAL: Ignore mapping IO port bar(1)
00:04:39.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:04:39.532 EAL: Ignore mapping IO port bar(1)
00:04:39.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:04:39.532 EAL: Ignore mapping IO port bar(1)
00:04:39.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:04:39.532 EAL: Ignore mapping IO port bar(1)
00:04:39.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:04:42.818 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:04:42.818 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:04:42.818 Starting DPDK initialization...
00:04:42.818 Starting SPDK post initialization...
00:04:42.818 SPDK NVMe probe
00:04:42.818 Attaching to 0000:5e:00.0
00:04:42.818 Attached to 0000:5e:00.0
00:04:42.818 Cleaning up...
00:04:42.818
00:04:42.818 real 0m4.365s
00:04:42.818 user 0m2.966s
00:04:42.818 sys 0m0.469s
00:04:42.818 14:06:43 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:42.818 14:06:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:42.818 ************************************
00:04:42.818 END TEST env_dpdk_post_init
00:04:42.818 ************************************
00:04:42.818 14:06:43 env -- env/env.sh@26 -- # uname
00:04:42.818 14:06:43 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:42.818 14:06:43 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:42.818 14:06:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:42.818 14:06:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:42.818 14:06:43 env -- common/autotest_common.sh@10 -- # set +x
00:04:42.818 ************************************
00:04:42.818 START TEST env_mem_callbacks
00:04:42.818 ************************************
00:04:42.818 14:06:43 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:42.818 EAL: Detected CPU lcores: 96
00:04:42.818 EAL: Detected NUMA nodes: 2
00:04:42.818 EAL: Detected shared linkage of DPDK
00:04:42.818 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:42.818 EAL: Selected IOVA mode 'VA'
00:04:42.818 EAL: VFIO support initialized
00:04:42.818 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:42.818
00:04:42.818
00:04:42.818 CUnit - A unit testing framework for C - Version 2.1-3
00:04:42.818 http://cunit.sourceforge.net/
00:04:42.818
00:04:42.818
00:04:42.818 Suite: memory
00:04:42.818 Test: test ...
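(Editor's note: the register/unregister trace that follows appears to be the test's memory-event callback reporting regions DPDK adds and removes as the test mallocs and frees. The binary can be run standalone outside run_test; a minimal sketch, assuming hugepages are already configured and using the path from this job's workspace:)

```bash
# Minimal sketch: run the memory-callback unit test on its own (needs
# root for hugepage access; path taken from this job's workspace layout).
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo "$rootdir/test/env/mem_callbacks/mem_callbacks"
```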
00:04:42.818 register 0x200000200000 2097152
00:04:42.818 malloc 3145728
00:04:42.818 register 0x200000400000 4194304
00:04:42.818 buf 0x200000500000 len 3145728 PASSED
00:04:42.818 malloc 64
00:04:42.818 buf 0x2000004fff40 len 64 PASSED
00:04:42.818 malloc 4194304
00:04:42.818 register 0x200000800000 6291456
00:04:42.818 buf 0x200000a00000 len 4194304 PASSED
00:04:42.818 free 0x200000500000 3145728
00:04:42.818 free 0x2000004fff40 64
00:04:42.818 unregister 0x200000400000 4194304 PASSED
00:04:42.818 free 0x200000a00000 4194304
00:04:42.818 unregister 0x200000800000 6291456 PASSED
00:04:42.818 malloc 8388608
00:04:42.818 register 0x200000400000 10485760
00:04:42.818 buf 0x200000600000 len 8388608 PASSED
00:04:42.818 free 0x200000600000 8388608
00:04:42.818 unregister 0x200000400000 10485760 PASSED
00:04:42.818 passed
00:04:42.818
00:04:42.818 Run Summary: Type Total Ran Passed Failed Inactive
00:04:42.818 suites 1 1 n/a 0 0
00:04:42.818 tests 1 1 1 0 0
00:04:42.818 asserts 15 15 15 0 n/a
00:04:42.818
00:04:42.818 Elapsed time = 0.008 seconds
00:04:42.818
00:04:42.818 real 0m0.064s
00:04:42.818 user 0m0.020s
00:04:42.818 sys 0m0.044s
00:04:42.818 14:06:43 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:42.818 14:06:43 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:42.818 ************************************
00:04:42.818 END TEST env_mem_callbacks
00:04:42.818 ************************************
00:04:42.818
00:04:42.818 real 0m6.265s
00:04:42.818 user 0m4.041s
00:04:42.818 sys 0m1.306s
00:04:42.818 14:06:43 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:42.818 14:06:43 env -- common/autotest_common.sh@10 -- # set +x
00:04:42.818 ************************************
00:04:42.818 END TEST env
00:04:42.818 ************************************
00:04:42.819 14:06:43 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:42.819 14:06:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:42.819 14:06:43 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:42.819 14:06:43 -- common/autotest_common.sh@10 -- # set +x
00:04:42.819 ************************************
00:04:42.819 START TEST rpc
00:04:42.819 ************************************
00:04:42.819 14:06:43 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:43.078 * Looking for test storage...
00:04:43.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:43.078 14:06:43 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:43.078 14:06:43 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:43.078 14:06:43 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:43.078 14:06:43 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:43.078 14:06:43 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:43.078 14:06:43 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:43.078 14:06:43 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:43.078 14:06:43 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:43.078 14:06:43 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:43.078 14:06:43 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:43.078 14:06:43 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:43.078 14:06:43 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:43.078 14:06:43 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:43.078 14:06:43 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:43.078 14:06:43 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:43.078 14:06:43 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:43.078 14:06:43 rpc -- scripts/common.sh@345 -- # : 1
00:04:43.078 14:06:43 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:43.078 14:06:43 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:43.078 14:06:43 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:43.078 14:06:43 rpc -- scripts/common.sh@353 -- # local d=1
00:04:43.078 14:06:43 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:43.078 14:06:43 rpc -- scripts/common.sh@355 -- # echo 1
00:04:43.078 14:06:43 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:43.078 14:06:43 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:43.078 14:06:43 rpc -- scripts/common.sh@353 -- # local d=2
00:04:43.078 14:06:43 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:43.078 14:06:43 rpc -- scripts/common.sh@355 -- # echo 2
00:04:43.078 14:06:43 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:43.078 14:06:43 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:43.078 14:06:43 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:43.078 14:06:43 rpc -- scripts/common.sh@368 -- # return 0
00:04:43.078 14:06:43 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:43.078 14:06:43 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:43.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:43.078 --rc genhtml_branch_coverage=1
00:04:43.078 --rc genhtml_function_coverage=1
00:04:43.078 --rc genhtml_legend=1
00:04:43.078 --rc geninfo_all_blocks=1
00:04:43.078 --rc geninfo_unexecuted_blocks=1
00:04:43.078
00:04:43.078 '
00:04:43.078 14:06:43 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:43.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:43.078 --rc genhtml_branch_coverage=1
00:04:43.078 --rc genhtml_function_coverage=1
00:04:43.078 --rc genhtml_legend=1
00:04:43.078 --rc geninfo_all_blocks=1
00:04:43.078 --rc geninfo_unexecuted_blocks=1
00:04:43.078
00:04:43.078 '
00:04:43.078 14:06:43 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:43.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:43.078 --rc genhtml_branch_coverage=1
00:04:43.078 --rc genhtml_function_coverage=1
00:04:43.078 --rc genhtml_legend=1
00:04:43.078 --rc geninfo_all_blocks=1
00:04:43.078 --rc geninfo_unexecuted_blocks=1
00:04:43.078
00:04:43.078 '
00:04:43.078 14:06:43 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:43.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:43.078 --rc genhtml_branch_coverage=1
00:04:43.078 --rc genhtml_function_coverage=1
00:04:43.078 --rc genhtml_legend=1
00:04:43.078 --rc geninfo_all_blocks=1
00:04:43.078 --rc geninfo_unexecuted_blocks=1
00:04:43.078
00:04:43.078 '
00:04:43.078 14:06:43 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1436685
00:04:43.078 14:06:43 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:43.078 14:06:43 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:43.078 14:06:43 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1436685
00:04:43.078 14:06:43 rpc -- common/autotest_common.sh@835 -- # '[' -z 1436685 ']'
00:04:43.078 14:06:43 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:43.078 14:06:43 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:43.078 14:06:43 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:43.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:43.078 14:06:43 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:43.078 14:06:43 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:43.078 [2024-12-10 14:06:43.788133] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization...
00:04:43.078 [2024-12-10 14:06:43.788182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436685 ]
00:04:43.349 [2024-12-10 14:06:43.869536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:43.349 [2024-12-10 14:06:43.909352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:43.349 [2024-12-10 14:06:43.909388] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1436685' to capture a snapshot of events at runtime.
00:04:43.349 [2024-12-10 14:06:43.909395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:43.349 [2024-12-10 14:06:43.909401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:43.349 [2024-12-10 14:06:43.909406] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1436685 for offline analysis/debug.
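(Editor's note: the rpc_integrity run that follows drives the freshly started spdk_tgt over its UNIX-socket JSON-RPC. Stripped of the harness wrappers, the flow looks roughly like this sketch; the `-t 30` wait is a crude stand-in for the harness's waitforlisten, and paths follow this job's workspace:)

```bash
# Rough standalone equivalent of the rpc_integrity flow below (a sketch,
# not the harness itself):
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$rootdir/scripts/rpc.py"

"$rootdir/build/bin/spdk_tgt" -e bdev &      # start target with bdev tracing
spdk_pid=$!
"$rpc" -t 30 rpc_get_methods > /dev/null     # crude wait for the RPC socket
"$rpc" bdev_malloc_create 8 512              # 8 MB malloc bdev -> "Malloc0"
"$rpc" bdev_passthru_create -b Malloc0 -p Passthru0
"$rpc" bdev_get_bdevs | jq length            # expect 2 bdevs
"$rpc" bdev_passthru_delete Passthru0
"$rpc" bdev_malloc_delete Malloc0
kill "$spdk_pid"; wait "$spdk_pid" || true   # nonzero wait status expected
```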
00:04:43.349 [2024-12-10 14:06:43.909945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:43.608 14:06:44 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:43.608 14:06:44 rpc -- common/autotest_common.sh@868 -- # return 0
00:04:43.608 14:06:44 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:43.608 14:06:44 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:43.608 14:06:44 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:43.608 14:06:44 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:43.608 14:06:44 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:43.608 14:06:44 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:43.608 14:06:44 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:43.608 ************************************
00:04:43.608 START TEST rpc_integrity
00:04:43.608 ************************************
00:04:43.608 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:43.608 14:06:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:43.608 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:43.608 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.608 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.608 14:06:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:43.608 14:06:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:43.608 14:06:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:43.608 14:06:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:43.608 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:43.608 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.608 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.608 14:06:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:43.608 14:06:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:43.608 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:43.608 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.608 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.608 14:06:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:43.608 {
00:04:43.608 "name": "Malloc0",
00:04:43.608 "aliases": [
00:04:43.608 "91d0e05c-bfa3-4171-b354-70ca5ef9eb42"
00:04:43.608 ],
00:04:43.608 "product_name": "Malloc disk",
00:04:43.608 "block_size": 512,
00:04:43.608 "num_blocks": 16384,
00:04:43.608 "uuid": "91d0e05c-bfa3-4171-b354-70ca5ef9eb42",
00:04:43.608 "assigned_rate_limits": {
00:04:43.608 "rw_ios_per_sec": 0,
00:04:43.608 "rw_mbytes_per_sec": 0,
00:04:43.608 "r_mbytes_per_sec": 0,
00:04:43.608 "w_mbytes_per_sec": 0
00:04:43.608 },
00:04:43.608 "claimed": false,
00:04:43.608 "zoned": false,
00:04:43.608 "supported_io_types": {
00:04:43.608 "read": true,
00:04:43.608 "write": true,
00:04:43.608 "unmap": true,
00:04:43.608 "flush": true,
00:04:43.608 "reset": true,
00:04:43.608 "nvme_admin": false,
00:04:43.608 "nvme_io": false,
00:04:43.608 "nvme_io_md": false,
00:04:43.608 "write_zeroes": true,
00:04:43.608 "zcopy": true,
00:04:43.608 "get_zone_info": false,
00:04:43.608 "zone_management": false,
00:04:43.608 "zone_append": false,
00:04:43.608 "compare": false,
00:04:43.608 "compare_and_write": false,
00:04:43.608 "abort": true,
00:04:43.608 "seek_hole": false,
00:04:43.608 "seek_data": false,
00:04:43.608 "copy": true,
00:04:43.608 "nvme_iov_md": false
00:04:43.608 },
00:04:43.608 "memory_domains": [
00:04:43.608 {
00:04:43.608 "dma_device_id": "system",
00:04:43.608 "dma_device_type": 1
00:04:43.608 },
00:04:43.608 {
00:04:43.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:43.608 "dma_device_type": 2
00:04:43.608 }
00:04:43.608 ],
00:04:43.608 "driver_specific": {}
00:04:43.608 }
00:04:43.608 ]'
00:04:43.608 14:06:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:43.608 14:06:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:43.608 14:06:44 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:43.608 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:43.608 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.608 [2024-12-10 14:06:44.284468] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:43.609 [2024-12-10 14:06:44.284499] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:43.609 [2024-12-10 14:06:44.284511] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x130da40
00:04:43.609 [2024-12-10 14:06:44.284517] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:43.609 [2024-12-10 14:06:44.285582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:43.609 [2024-12-10 14:06:44.285604] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:43.609 Passthru0
00:04:43.609 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.609 14:06:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:43.609 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:43.609 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.609 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.609 14:06:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:43.609 {
00:04:43.609 "name": "Malloc0",
00:04:43.609 "aliases": [
00:04:43.609 "91d0e05c-bfa3-4171-b354-70ca5ef9eb42"
00:04:43.609 ],
00:04:43.609 "product_name": "Malloc disk",
00:04:43.609 "block_size": 512,
00:04:43.609 "num_blocks": 16384,
00:04:43.609 "uuid": "91d0e05c-bfa3-4171-b354-70ca5ef9eb42",
00:04:43.609 "assigned_rate_limits": {
00:04:43.609 "rw_ios_per_sec": 0,
00:04:43.609 "rw_mbytes_per_sec": 0,
00:04:43.609 "r_mbytes_per_sec": 0,
00:04:43.609 "w_mbytes_per_sec": 0
00:04:43.609 },
00:04:43.609 "claimed": true,
00:04:43.609 "claim_type": "exclusive_write",
00:04:43.609 "zoned": false,
00:04:43.609 "supported_io_types": {
00:04:43.609 "read": true,
00:04:43.609 "write": true,
00:04:43.609 "unmap": true,
00:04:43.609 "flush": true,
00:04:43.609 "reset": true,
00:04:43.609 "nvme_admin": false,
00:04:43.609 "nvme_io": false,
00:04:43.609 "nvme_io_md": false,
00:04:43.609 "write_zeroes": true,
00:04:43.609 "zcopy": true,
00:04:43.609 "get_zone_info": false,
00:04:43.609 "zone_management": false,
00:04:43.609 "zone_append": false,
00:04:43.609 "compare": false,
00:04:43.609 "compare_and_write": false,
00:04:43.609 "abort": true,
00:04:43.609 "seek_hole": false,
00:04:43.609 "seek_data": false,
00:04:43.609 "copy": true,
00:04:43.609 "nvme_iov_md": false
00:04:43.609 },
00:04:43.609 "memory_domains": [
00:04:43.609 {
00:04:43.609 "dma_device_id": "system",
00:04:43.609 "dma_device_type": 1
00:04:43.609 },
00:04:43.609 {
00:04:43.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:43.609 "dma_device_type": 2
00:04:43.609 }
00:04:43.609 ],
00:04:43.609 "driver_specific": {}
00:04:43.609 },
00:04:43.609 {
00:04:43.609 "name": "Passthru0",
00:04:43.609 "aliases": [
00:04:43.609 "d1bf9b89-48e1-5f93-a434-5c5696a3efa4"
00:04:43.609 ],
00:04:43.609 "product_name": "passthru",
00:04:43.609 "block_size": 512,
00:04:43.609 "num_blocks": 16384,
00:04:43.609 "uuid": "d1bf9b89-48e1-5f93-a434-5c5696a3efa4",
00:04:43.609 "assigned_rate_limits": {
00:04:43.609 "rw_ios_per_sec": 0,
00:04:43.609 "rw_mbytes_per_sec": 0,
00:04:43.609 "r_mbytes_per_sec": 0,
00:04:43.609 "w_mbytes_per_sec": 0
00:04:43.609 },
00:04:43.609 "claimed": false,
00:04:43.609 "zoned": false,
00:04:43.609 "supported_io_types": {
00:04:43.609 "read": true,
00:04:43.609 "write": true,
00:04:43.609 "unmap": true,
00:04:43.609 "flush": true,
00:04:43.609 "reset": true,
00:04:43.609 "nvme_admin": false,
00:04:43.609 "nvme_io": false,
00:04:43.609 "nvme_io_md": false,
00:04:43.609 "write_zeroes": true,
00:04:43.609 "zcopy": true,
00:04:43.609 "get_zone_info": false,
00:04:43.609 "zone_management": false,
00:04:43.609 "zone_append": false,
00:04:43.609 "compare": false,
00:04:43.609 "compare_and_write": false,
00:04:43.609 "abort": true,
00:04:43.609 "seek_hole": false,
00:04:43.609 "seek_data": false,
00:04:43.609 "copy": true,
00:04:43.609 "nvme_iov_md": false
00:04:43.609 },
00:04:43.609 "memory_domains": [
00:04:43.609 {
00:04:43.609 "dma_device_id": "system",
00:04:43.609 "dma_device_type": 1
00:04:43.609 },
00:04:43.609 {
00:04:43.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:43.609 "dma_device_type": 2
00:04:43.609 }
00:04:43.609 ],
00:04:43.609 "driver_specific": {
00:04:43.609 "passthru": {
00:04:43.609 "name": "Passthru0",
00:04:43.609 "base_bdev_name": "Malloc0"
00:04:43.609 }
00:04:43.609 }
00:04:43.609 }
00:04:43.609 ]'
00:04:43.609 14:06:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:43.867 14:06:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:43.867 14:06:44 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:43.867 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:43.867 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.867 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.867 14:06:44 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:43.867 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:43.867 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.868 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.868 14:06:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:43.868 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:43.868 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.868 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.868 14:06:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:43.868 14:06:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:43.868 14:06:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:43.868
00:04:43.868 real 0m0.277s
00:04:43.868 user 0m0.173s
00:04:43.868 sys 0m0.036s
00:04:43.868 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:43.868 14:06:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.868 ************************************
00:04:43.868 END TEST rpc_integrity
00:04:43.868 ************************************
00:04:43.868 14:06:44 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:43.868 14:06:44 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:43.868 14:06:44 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:43.868 14:06:44 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:43.868 ************************************
00:04:43.868 START TEST rpc_plugins
00:04:43.868 ************************************
00:04:43.868 14:06:44 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:04:43.868 14:06:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:43.868 14:06:44 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:43.868 14:06:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:43.868 14:06:44 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.868 14:06:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:43.868 14:06:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:43.868 14:06:44 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:43.868 14:06:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:43.868 14:06:44 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.868 14:06:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:04:43.868 {
00:04:43.868 "name": "Malloc1",
00:04:43.868 "aliases": [
00:04:43.868 "5c54b2d9-f40d-4ae7-b52f-35f7799935c8"
00:04:43.868 ],
00:04:43.868 "product_name": "Malloc disk",
00:04:43.868 "block_size": 4096,
00:04:43.868 "num_blocks": 256,
00:04:43.868 "uuid": "5c54b2d9-f40d-4ae7-b52f-35f7799935c8",
00:04:43.868 "assigned_rate_limits": {
00:04:43.868 "rw_ios_per_sec": 0,
00:04:43.868 "rw_mbytes_per_sec": 0,
00:04:43.868 "r_mbytes_per_sec": 0,
00:04:43.868 "w_mbytes_per_sec": 0
00:04:43.868 },
00:04:43.868 "claimed": false,
00:04:43.868 "zoned": false,
00:04:43.868 "supported_io_types": {
00:04:43.868 "read": true,
00:04:43.868 "write": true,
00:04:43.868 "unmap": true,
00:04:43.868 "flush": true,
00:04:43.868 "reset": true,
00:04:43.868 "nvme_admin": false,
00:04:43.868 "nvme_io": false,
00:04:43.868 "nvme_io_md": false,
00:04:43.868 "write_zeroes": true,
00:04:43.868 "zcopy": true,
00:04:43.868 "get_zone_info": false,
00:04:43.868 "zone_management": false,
00:04:43.868 "zone_append": false,
00:04:43.868 "compare": false,
00:04:43.868 "compare_and_write": false,
00:04:43.868 "abort": true,
00:04:43.868 "seek_hole": false,
00:04:43.868 "seek_data": false,
00:04:43.868 "copy": true,
00:04:43.868 "nvme_iov_md": false
00:04:43.868 },
00:04:43.868 "memory_domains": [
00:04:43.868 {
00:04:43.868 "dma_device_id": "system",
00:04:43.868 "dma_device_type": 1
00:04:43.868 },
00:04:43.868 {
00:04:43.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:43.868 "dma_device_type": 2
00:04:43.868 }
00:04:43.868 ],
00:04:43.868 "driver_specific": {}
00:04:43.868 }
00:04:43.868 ]'
00:04:43.868 14:06:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:04:43.868 14:06:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:43.868 14:06:44 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:43.868 14:06:44 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:43.868 14:06:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:43.868 14:06:44 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.868 14:06:44 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:43.868 14:06:44 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:43.868 14:06:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:43.868 14:06:44 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.868 14:06:44 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:43.868 14:06:44 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:04:44.126 14:06:44 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:44.126
00:04:44.126 real 0m0.144s
00:04:44.126 user 0m0.087s
00:04:44.126 sys 0m0.020s
00:04:44.126 14:06:44 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:44.126 14:06:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:44.126 ************************************
00:04:44.126 END TEST rpc_plugins
00:04:44.126 ************************************
00:04:44.126 14:06:44 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:04:44.126 14:06:44 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:44.126 14:06:44 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:44.126 14:06:44 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:44.126 ************************************
00:04:44.126 START TEST rpc_trace_cmd_test
00:04:44.126 ************************************
00:04:44.127 14:06:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:04:44.127 14:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:04:44.127 14:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:04:44.127 14:06:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:44.127 14:06:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:44.127 14:06:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:44.127 14:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:04:44.127 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1436685",
00:04:44.127 "tpoint_group_mask": "0x8",
00:04:44.127 "iscsi_conn": {
00:04:44.127 "mask": "0x2",
00:04:44.127 "tpoint_mask": "0x0"
00:04:44.127 },
00:04:44.127 "scsi": {
00:04:44.127 "mask": "0x4",
00:04:44.127 "tpoint_mask": "0x0"
00:04:44.127 },
00:04:44.127 "bdev": {
00:04:44.127 "mask": "0x8",
00:04:44.127 "tpoint_mask": "0xffffffffffffffff"
00:04:44.127 },
00:04:44.127 "nvmf_rdma": {
00:04:44.127 "mask": "0x10",
00:04:44.127 "tpoint_mask": "0x0"
00:04:44.127 },
00:04:44.127 "nvmf_tcp": {
00:04:44.127 "mask": "0x20",
00:04:44.127 "tpoint_mask": "0x0"
00:04:44.127 },
00:04:44.127 "ftl": {
00:04:44.127 "mask": "0x40",
00:04:44.127 "tpoint_mask": "0x0"
00:04:44.127 },
00:04:44.127 "blobfs": {
00:04:44.127 "mask": "0x80",
00:04:44.127 "tpoint_mask": "0x0"
00:04:44.127 },
00:04:44.127 "dsa": {
00:04:44.127 "mask": "0x200",
00:04:44.127 "tpoint_mask": "0x0"
00:04:44.127 },
00:04:44.127 "thread": {
00:04:44.127 "mask": "0x400",
00:04:44.127 "tpoint_mask": "0x0"
00:04:44.127 },
00:04:44.127 "nvme_pcie": {
00:04:44.127 "mask": "0x800",
00:04:44.127 "tpoint_mask": "0x0"
00:04:44.127 },
00:04:44.127 "iaa": {
00:04:44.127 "mask": "0x1000",
00:04:44.127 "tpoint_mask": "0x0"
00:04:44.127 },
00:04:44.127 "nvme_tcp": {
00:04:44.127 "mask": "0x2000",
00:04:44.127 "tpoint_mask": "0x0"
00:04:44.127 },
00:04:44.127 "bdev_nvme": {
00:04:44.127 "mask": "0x4000",
00:04:44.127 "tpoint_mask": "0x0"
00:04:44.127 },
00:04:44.127 "sock": {
00:04:44.127 "mask": "0x8000",
00:04:44.127 "tpoint_mask": "0x0"
00:04:44.127 },
00:04:44.127 "blob": {
00:04:44.127 "mask": "0x10000",
00:04:44.127 "tpoint_mask": "0x0"
00:04:44.127 },
00:04:44.127 "bdev_raid": {
00:04:44.127 "mask": "0x20000",
00:04:44.127 "tpoint_mask": "0x0"
00:04:44.127 },
00:04:44.127 "scheduler": {
00:04:44.127 "mask": "0x40000",
00:04:44.127 "tpoint_mask": "0x0"
00:04:44.127 }
00:04:44.127 }'
00:04:44.127 14:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:04:44.127 14:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:04:44.127 14:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:04:44.127 14:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:04:44.127 14:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:04:44.127 14:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:04:44.127 14:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:04:44.385 14:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:04:44.385 14:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:04:44.385 14:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:04:44.385
00:04:44.385 real 0m0.231s
00:04:44.385 user 0m0.199s
00:04:44.385 sys 0m0.024s
00:04:44.385 14:06:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:44.385 14:06:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:44.385 ************************************
00:04:44.385 END TEST rpc_trace_cmd_test
00:04:44.385 ************************************
00:04:44.385 14:06:44 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:04:44.385 14:06:44 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:04:44.385 14:06:44 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:04:44.385 14:06:44 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:44.385 14:06:44 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:44.385 14:06:44 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:44.385 ************************************
00:04:44.385 START TEST rpc_daemon_integrity
00:04:44.385 ************************************
00:04:44.385 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:44.385 14:06:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:44.385 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:44.385 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:44.385 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:44.385 14:06:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:44.385 14:06:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:44.385 14:06:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:44.385 14:06:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:44.385 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:44.385 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:44.385 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:44.385 14:06:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:04:44.385 14:06:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:44.385 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:44.385 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:44.385 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:44.385 14:06:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:44.385 {
00:04:44.385 "name": "Malloc2",
00:04:44.385 "aliases": [
00:04:44.385 "062452d6-e203-4887-b7ad-42f5bb1323a0"
00:04:44.385 ],
00:04:44.385 "product_name": "Malloc disk",
00:04:44.385 "block_size": 512,
00:04:44.385 "num_blocks": 16384,
00:04:44.385 "uuid": "062452d6-e203-4887-b7ad-42f5bb1323a0",
00:04:44.385 "assigned_rate_limits": {
00:04:44.385 "rw_ios_per_sec": 0,
00:04:44.385 "rw_mbytes_per_sec": 0,
00:04:44.385 "r_mbytes_per_sec": 0,
00:04:44.385 "w_mbytes_per_sec": 0
00:04:44.385 },
00:04:44.385 "claimed": false,
00:04:44.385 "zoned": false,
00:04:44.385 "supported_io_types": {
00:04:44.385 "read": true,
00:04:44.385 "write": true,
00:04:44.385 "unmap": true,
00:04:44.385 "flush": true,
00:04:44.385 "reset": true,
00:04:44.385 "nvme_admin": false,
00:04:44.385 "nvme_io": false,
00:04:44.385 "nvme_io_md": false,
00:04:44.385 "write_zeroes": true,
00:04:44.385 "zcopy": true,
00:04:44.385 "get_zone_info": false,
00:04:44.385 "zone_management": false,
00:04:44.385 "zone_append": false,
00:04:44.385 "compare": false,
00:04:44.385 "compare_and_write": false,
00:04:44.385 "abort": true,
00:04:44.385 "seek_hole": false,
00:04:44.385 "seek_data": false,
00:04:44.385 "copy": true,
00:04:44.385 "nvme_iov_md": false
00:04:44.385 },
00:04:44.385 "memory_domains": [
00:04:44.385 {
00:04:44.385 "dma_device_id": "system",
00:04:44.385 "dma_device_type": 1
00:04:44.385 },
00:04:44.385 {
00:04:44.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:44.385 "dma_device_type": 2
00:04:44.385 }
00:04:44.385 ],
00:04:44.385 "driver_specific": {}
00:04:44.385 }
00:04:44.385 ]'
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:44.644 [2024-12-10 14:06:45.142781] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:04:44.644 [2024-12-10 14:06:45.142810] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:44.644 [2024-12-10 14:06:45.142822] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12db2e0
00:04:44.644 [2024-12-10 14:06:45.142828] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:44.644 [2024-12-10 14:06:45.143792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:44.644 [2024-12-10 14:06:45.143813] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:44.644 Passthru0
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:44.644 {
00:04:44.644 "name": "Malloc2",
00:04:44.644 "aliases": [
00:04:44.644 "062452d6-e203-4887-b7ad-42f5bb1323a0"
00:04:44.644 ],
00:04:44.644 "product_name": "Malloc disk",
00:04:44.644 "block_size": 512,
00:04:44.644 "num_blocks": 16384,
00:04:44.644 "uuid": "062452d6-e203-4887-b7ad-42f5bb1323a0",
00:04:44.644 "assigned_rate_limits": {
00:04:44.644 "rw_ios_per_sec": 0,
00:04:44.644 "rw_mbytes_per_sec": 0,
00:04:44.644 "r_mbytes_per_sec": 0,
00:04:44.644 "w_mbytes_per_sec": 0
00:04:44.644 },
00:04:44.644 "claimed": true,
00:04:44.644 "claim_type": "exclusive_write",
00:04:44.644 "zoned": false,
00:04:44.644 "supported_io_types": {
00:04:44.644 "read": true,
00:04:44.644 "write": true,
00:04:44.644 "unmap": true,
00:04:44.644 "flush": true,
00:04:44.644 "reset": true,
00:04:44.644 "nvme_admin": false,
00:04:44.644 "nvme_io": false,
00:04:44.644 "nvme_io_md": false,
00:04:44.644 "write_zeroes": true,
00:04:44.644 "zcopy": true,
00:04:44.644 "get_zone_info": false,
00:04:44.644 "zone_management": false,
00:04:44.644 "zone_append": false,
00:04:44.644 "compare": false,
00:04:44.644 "compare_and_write": false,
00:04:44.644 "abort": true,
00:04:44.644 "seek_hole": false,
00:04:44.644 "seek_data": false,
00:04:44.644 "copy": true,
00:04:44.644 "nvme_iov_md": false
00:04:44.644 },
00:04:44.644 "memory_domains": [
00:04:44.644 {
00:04:44.644 "dma_device_id": "system",
00:04:44.644 "dma_device_type": 1
00:04:44.644 },
00:04:44.644 {
00:04:44.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:44.644 "dma_device_type": 2
00:04:44.644 }
00:04:44.644 ],
00:04:44.644 "driver_specific": {}
00:04:44.644 },
00:04:44.644 {
00:04:44.644 "name": "Passthru0",
00:04:44.644 "aliases": [
00:04:44.644 "95b76dce-897d-5d1e-ad6a-70a0a253f717"
00:04:44.644 ],
00:04:44.644 "product_name": "passthru",
00:04:44.644 "block_size": 512,
00:04:44.644 "num_blocks": 16384,
00:04:44.644 "uuid": "95b76dce-897d-5d1e-ad6a-70a0a253f717",
00:04:44.644 "assigned_rate_limits": {
00:04:44.644 "rw_ios_per_sec": 0,
00:04:44.644 "rw_mbytes_per_sec": 0,
00:04:44.644 "r_mbytes_per_sec": 0,
00:04:44.644 "w_mbytes_per_sec": 0
00:04:44.644 },
00:04:44.644 "claimed": false,
00:04:44.644 "zoned": false,
00:04:44.644 "supported_io_types": {
00:04:44.644 "read": true,
00:04:44.644 "write": true,
00:04:44.644 "unmap": true,
00:04:44.644 "flush": true,
00:04:44.644 "reset": true,
00:04:44.644 "nvme_admin": false,
00:04:44.644 "nvme_io": false,
00:04:44.644 "nvme_io_md": false,
00:04:44.644 "write_zeroes": true,
00:04:44.644 "zcopy": true,
00:04:44.644 "get_zone_info": false,
00:04:44.644 "zone_management": false,
00:04:44.644 "zone_append": false,
00:04:44.644 "compare": false,
00:04:44.644 "compare_and_write": false,
00:04:44.644 "abort": true,
00:04:44.644 "seek_hole": false,
00:04:44.644 "seek_data": false,
00:04:44.644 "copy": true,
00:04:44.644 "nvme_iov_md": false
00:04:44.644 },
00:04:44.644 "memory_domains": [
00:04:44.644 {
00:04:44.644 "dma_device_id": "system",
00:04:44.644 "dma_device_type": 1
00:04:44.644 },
00:04:44.644 {
00:04:44.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:44.644 "dma_device_type": 2
00:04:44.644 }
00:04:44.644 ],
00:04:44.644 "driver_specific": {
00:04:44.644 "passthru": {
00:04:44.644 "name": "Passthru0",
00:04:44.644 "base_bdev_name": "Malloc2"
00:04:44.644 }
00:04:44.644 }
00:04:44.644 }
00:04:44.644 ]'
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:44.644
00:04:44.644 real 0m0.283s
00:04:44.644 user 0m0.188s
00:04:44.644 sys 0m0.032s
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:44.644 14:06:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:44.644 ************************************
00:04:44.644 END TEST rpc_daemon_integrity
00:04:44.644 ************************************
00:04:44.644 14:06:45 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:04:44.644 14:06:45 rpc -- rpc/rpc.sh@84 -- # killprocess 1436685
00:04:44.644 14:06:45 rpc -- common/autotest_common.sh@954 -- # '[' -z 1436685 ']'
00:04:44.644 14:06:45 rpc -- common/autotest_common.sh@958 -- # kill -0 1436685
00:04:44.644 14:06:45 rpc -- common/autotest_common.sh@959 -- # uname
00:04:44.644 14:06:45 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:44.644 14:06:45 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1436685
00:04:44.644 14:06:45 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:44.644 14:06:45 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:44.644 14:06:45 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1436685'
00:04:44.644 killing process with pid 1436685
00:04:44.644 14:06:45 rpc -- common/autotest_common.sh@973 -- # kill 1436685
00:04:44.644 14:06:45 rpc -- common/autotest_common.sh@978 -- # wait 1436685
00:04:45.212
00:04:45.212 real 0m2.110s
00:04:45.212 user 0m2.709s
00:04:45.212 sys 0m0.682s
00:04:45.212 14:06:45 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:45.212 14:06:45 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:45.212 ************************************
00:04:45.212 END TEST rpc
00:04:45.212 ************************************
00:04:45.212 14:06:45 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:04:45.212 14:06:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:45.212 14:06:45 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:45.212 14:06:45 -- common/autotest_common.sh@10 -- # set +x
00:04:45.212 ************************************
00:04:45.212 START TEST skip_rpc
00:04:45.212 ************************************
00:04:45.212 14:06:45 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:04:45.212 * Looking for test storage...
00:04:45.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:45.212 14:06:45 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:45.212 14:06:45 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:45.212 14:06:45 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:45.212 14:06:45 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@345 -- # : 1
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:45.212 14:06:45 skip_rpc -- scripts/common.sh@368 -- # return 0
00:04:45.212 14:06:45 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:45.212 14:06:45 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:45.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:45.212 --rc genhtml_branch_coverage=1
00:04:45.212 --rc genhtml_function_coverage=1
00:04:45.212 --rc genhtml_legend=1
00:04:45.212 --rc geninfo_all_blocks=1
00:04:45.212 --rc geninfo_unexecuted_blocks=1
00:04:45.212
00:04:45.212 '
00:04:45.212 14:06:45 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:45.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:45.212 --rc genhtml_branch_coverage=1
00:04:45.212 --rc genhtml_function_coverage=1
00:04:45.212 --rc genhtml_legend=1
00:04:45.212 --rc geninfo_all_blocks=1
00:04:45.212 --rc geninfo_unexecuted_blocks=1
00:04:45.212
00:04:45.212 '
00:04:45.213 14:06:45 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:45.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:45.213 --rc genhtml_branch_coverage=1
00:04:45.213 --rc genhtml_function_coverage=1
00:04:45.213 --rc genhtml_legend=1
00:04:45.213 --rc geninfo_all_blocks=1
00:04:45.213 --rc geninfo_unexecuted_blocks=1
00:04:45.213
00:04:45.213 '
00:04:45.213 14:06:45 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:45.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:45.213 --rc genhtml_branch_coverage=1
00:04:45.213 --rc genhtml_function_coverage=1
00:04:45.213 --rc genhtml_legend=1
00:04:45.213 --rc geninfo_all_blocks=1
00:04:45.213 --rc geninfo_unexecuted_blocks=1
00:04:45.213
00:04:45.213 '
00:04:45.213 14:06:45 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:45.213 14:06:45 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:04:45.213 14:06:45 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:04:45.213 14:06:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:45.213 14:06:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:45.213 14:06:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:45.213 ************************************
00:04:45.213 START TEST skip_rpc
00:04:45.213 ************************************
00:04:45.213 14:06:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
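(Editor's note: the skip_rpc case starting below boots spdk_tgt with --no-rpc-server and asserts that an RPC call fails — that is what the harness's NOT wrapper around rpc_cmd checks. A standalone sketch of the same assertion, using this job's workspace path:)

```bash
# Sketch of the assertion skip_rpc performs below: with --no-rpc-server
# the target must reject RPCs, so a successful call is a test failure.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$rootdir/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5                                       # same crude wait as the test
if "$rootdir/scripts/rpc.py" spdk_get_version; then
    echo "FAIL: RPC served despite --no-rpc-server" >&2
    exit 1
fi
kill "$spdk_pid"; wait "$spdk_pid" || true
```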
14:06:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1437110 00:04:45.213 14:06:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.213 14:06:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:45.213 14:06:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:45.472 [2024-12-10 14:06:45.978146] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:04:45.472 [2024-12-10 14:06:45.978183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1437110 ] 00:04:45.472 [2024-12-10 14:06:46.058270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.472 [2024-12-10 14:06:46.097617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1437110 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1437110 ']' 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1437110 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1437110 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1437110' 00:04:50.740 killing process with pid 1437110 00:04:50.740 14:06:50 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1437110 00:04:50.740 14:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1437110 00:04:50.740 00:04:50.740 real 0m5.362s 00:04:50.740 user 0m5.109s 00:04:50.740 sys 0m0.288s 00:04:50.740 14:06:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.740 14:06:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.740 ************************************ 00:04:50.740 END TEST skip_rpc 00:04:50.740 ************************************ 00:04:50.740 14:06:51 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:50.740 14:06:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.740 14:06:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.740 14:06:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.740 ************************************ 00:04:50.740 START TEST skip_rpc_with_json 00:04:50.740 ************************************ 00:04:50.740 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:50.741 14:06:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:50.741 14:06:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1438050 00:04:50.741 14:06:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.741 14:06:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.741 14:06:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1438050 00:04:50.741 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1438050 ']' 00:04:50.741 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.741 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.741 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.741 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.741 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.741 [2024-12-10 14:06:51.413605] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:04:50.741 [2024-12-10 14:06:51.413650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1438050 ]
00:04:50.999 [2024-12-10 14:06:51.491011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:50.999 [2024-12-10 14:06:51.531240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:51.258 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:51.258 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:04:51.258 14:06:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:04:51.258 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:51.258 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:51.258 [2024-12-10 14:06:51.746789] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:04:51.258 request:
00:04:51.258 {
00:04:51.258 "trtype": "tcp",
00:04:51.258 "method": "nvmf_get_transports",
00:04:51.258 "req_id": 1
00:04:51.258 }
00:04:51.258 Got JSON-RPC error response
00:04:51.258 response:
00:04:51.258 {
00:04:51.258 "code": -19,
00:04:51.258 "message": "No such device"
00:04:51.258 }
00:04:51.258 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:04:51.258 14:06:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:04:51.258 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:51.258 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:51.258 [2024-12-10 14:06:51.758896] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:51.258 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:51.258 14:06:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:04:51.258 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:51.258 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:51.259 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:51.259 14:06:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:51.259 {
00:04:51.259 "subsystems": [
00:04:51.259 {
00:04:51.259 "subsystem": "fsdev",
00:04:51.259 "config": [
00:04:51.259 {
00:04:51.259 "method": "fsdev_set_opts",
00:04:51.259 "params": {
00:04:51.259 "fsdev_io_pool_size": 65535,
00:04:51.259 "fsdev_io_cache_size": 256
00:04:51.259 }
00:04:51.259 }
00:04:51.259 ]
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "subsystem": "vfio_user_target",
00:04:51.259 "config": null
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "subsystem": "keyring",
00:04:51.259 "config": []
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "subsystem": "iobuf",
00:04:51.259 "config": [
00:04:51.259 {
00:04:51.259 "method": "iobuf_set_options",
00:04:51.259 "params": {
00:04:51.259 "small_pool_count": 8192,
00:04:51.259 "large_pool_count": 1024,
00:04:51.259 "small_bufsize": 8192,
00:04:51.259 "large_bufsize": 135168,
00:04:51.259 "enable_numa": false
00:04:51.259 }
00:04:51.259 }
00:04:51.259 ]
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "subsystem": "sock",
00:04:51.259 "config": [
00:04:51.259 {
00:04:51.259 "method": "sock_set_default_impl",
00:04:51.259 "params": {
00:04:51.259 "impl_name": "posix"
00:04:51.259 }
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "method": "sock_impl_set_options",
00:04:51.259 "params": {
00:04:51.259 "impl_name": "ssl",
00:04:51.259 "recv_buf_size": 4096,
00:04:51.259 "send_buf_size": 4096,
00:04:51.259 "enable_recv_pipe": true,
00:04:51.259 "enable_quickack": false,
00:04:51.259 "enable_placement_id": 0,
00:04:51.259 "enable_zerocopy_send_server": true,
00:04:51.259 "enable_zerocopy_send_client": false,
00:04:51.259 "zerocopy_threshold": 0,
00:04:51.259 "tls_version": 0,
00:04:51.259 "enable_ktls": false
00:04:51.259 }
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "method": "sock_impl_set_options",
00:04:51.259 "params": {
00:04:51.259 "impl_name": "posix",
00:04:51.259 "recv_buf_size": 2097152,
00:04:51.259 "send_buf_size": 2097152,
00:04:51.259 "enable_recv_pipe": true,
00:04:51.259 "enable_quickack": false,
00:04:51.259 "enable_placement_id": 0,
00:04:51.259 "enable_zerocopy_send_server": true,
00:04:51.259 "enable_zerocopy_send_client": false,
00:04:51.259 "zerocopy_threshold": 0,
00:04:51.259 "tls_version": 0,
00:04:51.259 "enable_ktls": false
00:04:51.259 }
00:04:51.259 }
00:04:51.259 ]
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "subsystem": "vmd",
00:04:51.259 "config": []
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "subsystem": "accel",
00:04:51.259 "config": [
00:04:51.259 {
00:04:51.259 "method": "accel_set_options",
00:04:51.259 "params": {
00:04:51.259 "small_cache_size": 128,
00:04:51.259 "large_cache_size": 16,
00:04:51.259 "task_count": 2048,
00:04:51.259 "sequence_count": 2048,
00:04:51.259 "buf_count": 2048
00:04:51.259 }
00:04:51.259 }
00:04:51.259 ]
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "subsystem": "bdev",
00:04:51.259 "config": [
00:04:51.259 {
00:04:51.259 "method": "bdev_set_options",
00:04:51.259 "params": {
00:04:51.259 "bdev_io_pool_size": 65535,
00:04:51.259 "bdev_io_cache_size": 256,
00:04:51.259 "bdev_auto_examine": true,
00:04:51.259 "iobuf_small_cache_size": 128,
00:04:51.259 "iobuf_large_cache_size": 16
00:04:51.259 }
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "method": "bdev_raid_set_options",
00:04:51.259 "params": {
00:04:51.259 "process_window_size_kb": 1024,
00:04:51.259 "process_max_bandwidth_mb_sec": 0
00:04:51.259 }
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "method": "bdev_iscsi_set_options",
00:04:51.259 "params": {
00:04:51.259 "timeout_sec": 30
00:04:51.259 }
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "method": "bdev_nvme_set_options",
00:04:51.259 "params": {
00:04:51.259 "action_on_timeout": "none",
00:04:51.259 "timeout_us": 0,
00:04:51.259 "timeout_admin_us": 0,
00:04:51.259 "keep_alive_timeout_ms": 10000,
00:04:51.259 "arbitration_burst": 0,
00:04:51.259 "low_priority_weight": 0,
00:04:51.259 "medium_priority_weight": 0,
00:04:51.259 "high_priority_weight": 0,
00:04:51.259 "nvme_adminq_poll_period_us": 10000,
00:04:51.259 "nvme_ioq_poll_period_us": 0,
00:04:51.259 "io_queue_requests": 0,
00:04:51.259 "delay_cmd_submit": true,
00:04:51.259 "transport_retry_count": 4,
00:04:51.259 "bdev_retry_count": 3,
00:04:51.259 "transport_ack_timeout": 0,
00:04:51.259 "ctrlr_loss_timeout_sec": 0,
00:04:51.259 "reconnect_delay_sec": 0,
00:04:51.259 "fast_io_fail_timeout_sec": 0,
00:04:51.259 "disable_auto_failback": false,
00:04:51.259 "generate_uuids": false,
00:04:51.259 "transport_tos": 0,
00:04:51.259 "nvme_error_stat": false,
00:04:51.259 "rdma_srq_size": 0,
00:04:51.259 "io_path_stat": false,
00:04:51.259 "allow_accel_sequence": false,
00:04:51.259 "rdma_max_cq_size": 0,
00:04:51.259 "rdma_cm_event_timeout_ms": 0,
00:04:51.259 "dhchap_digests": [
00:04:51.259 "sha256",
00:04:51.259 "sha384",
00:04:51.259 "sha512"
00:04:51.259 ],
00:04:51.259 "dhchap_dhgroups": [
00:04:51.259 "null",
00:04:51.259 "ffdhe2048",
00:04:51.259 "ffdhe3072",
00:04:51.259 "ffdhe4096",
00:04:51.259 "ffdhe6144",
00:04:51.259 "ffdhe8192"
00:04:51.259 ]
00:04:51.259 }
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "method": "bdev_nvme_set_hotplug",
00:04:51.259 "params": {
00:04:51.259 "period_us": 100000,
00:04:51.259 "enable": false
00:04:51.259 }
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "method": "bdev_wait_for_examine"
00:04:51.259 }
00:04:51.259 ]
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "subsystem": "scsi",
00:04:51.259 "config": null
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "subsystem": "scheduler",
00:04:51.259 "config": [
00:04:51.259 {
00:04:51.259 "method": "framework_set_scheduler",
00:04:51.259 "params": {
00:04:51.259 "name": "static"
00:04:51.259 }
00:04:51.259 }
00:04:51.259 ]
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "subsystem": "vhost_scsi",
00:04:51.259 "config": []
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "subsystem": "vhost_blk",
00:04:51.259 "config": []
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "subsystem": "ublk",
00:04:51.259 "config": []
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "subsystem": "nbd",
00:04:51.259 "config": []
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "subsystem": "nvmf",
00:04:51.259 "config": [
00:04:51.259 {
00:04:51.259 "method": "nvmf_set_config",
00:04:51.259 "params": {
00:04:51.259 "discovery_filter": "match_any",
00:04:51.259 "admin_cmd_passthru": {
00:04:51.259 "identify_ctrlr": false
00:04:51.259 },
00:04:51.259 "dhchap_digests": [
00:04:51.259 "sha256",
00:04:51.259 "sha384",
00:04:51.259 "sha512"
00:04:51.259 ],
00:04:51.259 "dhchap_dhgroups": [
00:04:51.259 "null",
00:04:51.259 "ffdhe2048",
00:04:51.259 "ffdhe3072",
00:04:51.259 "ffdhe4096",
00:04:51.259 "ffdhe6144",
00:04:51.259 "ffdhe8192"
00:04:51.259 ]
00:04:51.259 }
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "method": "nvmf_set_max_subsystems",
00:04:51.259 "params": {
00:04:51.259 "max_subsystems": 1024
00:04:51.259 }
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "method": "nvmf_set_crdt",
00:04:51.259 "params": {
00:04:51.259 "crdt1": 0,
00:04:51.259 "crdt2": 0,
00:04:51.259 "crdt3": 0
00:04:51.259 }
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "method": "nvmf_create_transport",
00:04:51.259 "params": {
00:04:51.259 "trtype": "TCP",
00:04:51.259 "max_queue_depth": 128,
00:04:51.259 "max_io_qpairs_per_ctrlr": 127,
00:04:51.259 "in_capsule_data_size": 4096,
00:04:51.259 "max_io_size": 131072,
00:04:51.259 "io_unit_size": 131072,
00:04:51.259 "max_aq_depth": 128,
00:04:51.259 "num_shared_buffers": 511,
00:04:51.259 "buf_cache_size": 4294967295,
00:04:51.259 "dif_insert_or_strip": false,
00:04:51.259 "zcopy": false,
00:04:51.259 "c2h_success": true,
00:04:51.259 "sock_priority": 0,
00:04:51.259 "abort_timeout_sec": 1,
00:04:51.259 "ack_timeout": 0,
00:04:51.259 "data_wr_pool_size": 0
00:04:51.259 }
00:04:51.259 }
00:04:51.259 ]
00:04:51.259 },
00:04:51.259 {
00:04:51.259 "subsystem": "iscsi",
00:04:51.259 "config": [
00:04:51.259 {
00:04:51.259 "method": "iscsi_set_options",
00:04:51.259 "params": {
00:04:51.259 "node_base": "iqn.2016-06.io.spdk",
00:04:51.259 "max_sessions": 128,
00:04:51.259 "max_connections_per_session": 2,
00:04:51.259 "max_queue_depth": 64,
00:04:51.259 "default_time2wait": 2,
00:04:51.259 "default_time2retain": 20,
00:04:51.259 "first_burst_length": 8192,
00:04:51.259 "immediate_data": true,
00:04:51.259 "allow_duplicated_isid": false,
00:04:51.259 "error_recovery_level": 0,
00:04:51.260 "nop_timeout": 60,
00:04:51.260 "nop_in_interval": 30,
00:04:51.260 "disable_chap": false,
00:04:51.260 "require_chap": false,
00:04:51.260 "mutual_chap": false,
00:04:51.260 "chap_group": 0,
00:04:51.260 "max_large_datain_per_connection": 64,
00:04:51.260 "max_r2t_per_connection": 4,
00:04:51.260 "pdu_pool_size": 36864,
00:04:51.260 "immediate_data_pool_size": 16384,
00:04:51.260 "data_out_pool_size": 2048
00:04:51.260 }
00:04:51.260 }
00:04:51.260 ]
00:04:51.260 }
00:04:51.260 ]
00:04:51.260 }
00:04:51.260 14:06:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:04:51.260 14:06:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1438050
00:04:51.260 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1438050 ']'
00:04:51.260 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1438050
00:04:51.260 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:04:51.260 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:51.260 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1438050
00:04:51.260 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:51.260 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:51.260 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1438050'
killing process with pid 1438050
00:04:51.260 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1438050
00:04:51.260 14:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1438050
00:04:51.827 14:06:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1438281
00:04:51.827 14:06:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:51.827 14:06:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1438281
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1438281 ']'
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1438281
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1438281
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1438281'
killing process with pid 1438281
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1438281
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1438281
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:04:57.105
00:04:57.105 real 0m6.278s
00:04:57.105 user 0m5.972s
00:04:57.105 sys 0m0.606s
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:57.105 ************************************
00:04:57.105 END TEST skip_rpc_with_json
00:04:57.105 ************************************
00:04:57.105 14:06:57 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:04:57.105 14:06:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:57.105 14:06:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:57.105 14:06:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:57.105 ************************************
00:04:57.105 START TEST skip_rpc_with_delay
00:04:57.105 ************************************
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
[2024-12-10 14:06:57.761802] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:57.105
00:04:57.105 real 0m0.067s
00:04:57.105 user 0m0.040s
00:04:57.105 sys 0m0.027s
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:57.105 14:06:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:04:57.105 ************************************
00:04:57.105 END TEST skip_rpc_with_delay
00:04:57.105 ************************************
00:04:57.105 14:06:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:04:57.105 14:06:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:04:57.105 14:06:57 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:04:57.105 14:06:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:57.105 14:06:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:57.105 14:06:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:57.105 ************************************
00:04:57.105 START TEST exit_on_failed_rpc_init
00:04:57.105 ************************************
00:04:57.364 14:06:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:04:57.364 14:06:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1439240
00:04:57.364 14:06:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:57.364 14:06:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1439240
00:04:57.364 14:06:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1439240 ']'
00:04:57.364 14:06:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:57.364 14:06:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:57.364 14:06:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:57.364 14:06:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:57.364 14:06:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:04:57.364 [2024-12-10 14:06:57.887850] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization...
00:04:57.364 [2024-12-10 14:06:57.887886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439240 ]
00:04:57.364 [2024-12-10 14:06:57.967031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:57.364 [2024-12-10 14:06:58.005048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:57.623 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:57.623 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:04:57.624 14:06:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:57.624 14:06:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:04:57.624 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:04:57.624 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:04:57.624 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:57.624 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:57.624 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:57.624 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:57.624 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:57.624 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:57.624 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:57.624 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:04:57.624 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:04:57.624 [2024-12-10 14:06:58.283368] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization...
00:04:57.624 [2024-12-10 14:06:58.283408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439246 ]
00:04:57.624 [2024-12-10 14:06:58.360714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:57.883 [2024-12-10 14:06:58.400501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:57.883 [2024-12-10 14:06:58.400560] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:04:57.883 [2024-12-10 14:06:58.400569] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:04:57.883 [2024-12-10 14:06:58.400576] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:04:57.883 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:04:57.883 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:57.883 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:04:57.883 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:04:57.883 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:04:57.883 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:57.883 14:06:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:04:57.883 14:06:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1439240
00:04:57.883 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1439240 ']'
00:04:57.883 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1439240
00:04:57.883 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:04:57.883 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:57.883 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1439240
00:04:57.883 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:57.883 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:57.883 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1439240'
killing process with pid 1439240
00:04:57.883 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1439240
00:04:57.883 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1439240
00:04:58.142
00:04:58.142 real 0m0.956s
00:04:58.142 user 0m1.013s
00:04:58.142 sys 0m0.398s
00:04:58.142 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:58.142 14:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:04:58.142 ************************************
00:04:58.142 END TEST exit_on_failed_rpc_init
00:04:58.142 ************************************
00:04:58.142 14:06:58 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:58.142
00:04:58.142 real 0m13.111s
00:04:58.142 user 0m12.338s
00:04:58.142 sys 0m1.592s
00:04:58.142 14:06:58 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:58.142 14:06:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:58.142 ************************************
00:04:58.142 END TEST skip_rpc
00:04:58.142 ************************************
00:04:58.142 14:06:58 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:04:58.142 14:06:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:58.142 14:06:58 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:58.142 14:06:58 -- common/autotest_common.sh@10 -- # set +x
00:04:58.401 ************************************
00:04:58.401 START TEST rpc_client
00:04:58.401 ************************************
00:04:58.401 14:06:58 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:04:58.401 * Looking for test storage...
00:04:58.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:04:58.401 14:06:58 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:58.401 14:06:58 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version
00:04:58.401 14:06:58 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:58.401 14:06:59 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@345 -- # : 1
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@353 -- # local d=1
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@355 -- # echo 1
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@353 -- # local d=2
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@355 -- # echo 2
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:58.401 14:06:59 rpc_client -- scripts/common.sh@368 -- # return 0
00:04:58.401 14:06:59 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:58.401 14:06:59 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:58.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.401 --rc genhtml_branch_coverage=1
00:04:58.401 --rc genhtml_function_coverage=1
00:04:58.401 --rc genhtml_legend=1
00:04:58.401 --rc geninfo_all_blocks=1
00:04:58.401 --rc geninfo_unexecuted_blocks=1
00:04:58.401
00:04:58.401 '
00:04:58.401 14:06:59 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:58.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.401 --rc genhtml_branch_coverage=1
00:04:58.401 --rc genhtml_function_coverage=1
00:04:58.401 --rc genhtml_legend=1
00:04:58.401 --rc geninfo_all_blocks=1
00:04:58.401 --rc geninfo_unexecuted_blocks=1
00:04:58.401
00:04:58.401 '
00:04:58.401 14:06:59 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:58.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.401 --rc genhtml_branch_coverage=1
00:04:58.401 --rc genhtml_function_coverage=1
00:04:58.401 --rc genhtml_legend=1
00:04:58.401 --rc geninfo_all_blocks=1
00:04:58.401 --rc geninfo_unexecuted_blocks=1
00:04:58.401
00:04:58.401 '
00:04:58.401 14:06:59 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:58.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.401 --rc genhtml_branch_coverage=1
00:04:58.401 --rc genhtml_function_coverage=1
00:04:58.401 --rc genhtml_legend=1
00:04:58.401 --rc geninfo_all_blocks=1
00:04:58.401 --rc geninfo_unexecuted_blocks=1
00:04:58.401
00:04:58.401 '
00:04:58.401 14:06:59 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:04:58.401 OK
00:04:58.401 14:06:59 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:04:58.401
00:04:58.401 real 0m0.197s
00:04:58.401 user 0m0.122s
00:04:58.401 sys 0m0.087s
00:04:58.401 14:06:59 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:58.401 14:06:59 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:04:58.401 ************************************
00:04:58.401 END TEST rpc_client
00:04:58.401 ************************************
00:04:58.402 14:06:59 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:04:58.402 14:06:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:58.402 14:06:59 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:58.402 14:06:59 -- common/autotest_common.sh@10 -- # set +x
00:04:58.662 ************************************
00:04:58.662 START TEST json_config
00:04:58.662 ************************************
00:04:58.662 14:06:59 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:04:58.662 14:06:59 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:58.662 14:06:59 json_config -- common/autotest_common.sh@1711 -- # lcov --version
00:04:58.662 14:06:59 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:58.662 14:06:59 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:58.662 14:06:59 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:58.662 14:06:59 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:58.662 14:06:59 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:58.662 14:06:59 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:04:58.662 14:06:59 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:04:58.662 14:06:59 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:04:58.662 14:06:59 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:04:58.662 14:06:59 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:04:58.662 14:06:59 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:04:58.662 14:06:59 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:04:58.662 14:06:59 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:58.662 14:06:59 json_config -- scripts/common.sh@344 -- # case "$op" in
00:04:58.662 14:06:59 json_config -- scripts/common.sh@345 -- # : 1
00:04:58.662 14:06:59 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:58.662 14:06:59 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:58.662 14:06:59 json_config -- scripts/common.sh@365 -- # decimal 1
00:04:58.662 14:06:59 json_config -- scripts/common.sh@353 -- # local d=1
00:04:58.662 14:06:59 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:58.662 14:06:59 json_config -- scripts/common.sh@355 -- # echo 1
00:04:58.662 14:06:59 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:04:58.662 14:06:59 json_config -- scripts/common.sh@366 -- # decimal 2
00:04:58.662 14:06:59 json_config -- scripts/common.sh@353 -- # local d=2
00:04:58.662 14:06:59 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:58.662 14:06:59 json_config -- scripts/common.sh@355 -- # echo 2
00:04:58.662 14:06:59 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:04:58.662 14:06:59 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:58.662 14:06:59 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:58.662 14:06:59 json_config -- scripts/common.sh@368 -- # return 0
00:04:58.662 14:06:59 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:58.662 14:06:59 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:58.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.662 --rc genhtml_branch_coverage=1
00:04:58.662 --rc genhtml_function_coverage=1
00:04:58.662 --rc genhtml_legend=1
00:04:58.662 --rc geninfo_all_blocks=1
00:04:58.662 --rc geninfo_unexecuted_blocks=1
00:04:58.662
00:04:58.662 '
00:04:58.662 14:06:59 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:58.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.662 --rc genhtml_branch_coverage=1
00:04:58.662 --rc genhtml_function_coverage=1
00:04:58.662 --rc genhtml_legend=1
00:04:58.662 --rc geninfo_all_blocks=1
00:04:58.662 --rc geninfo_unexecuted_blocks=1
00:04:58.662
00:04:58.662 '
00:04:58.662 14:06:59 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:58.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.662 --rc genhtml_branch_coverage=1
00:04:58.662 --rc genhtml_function_coverage=1
00:04:58.662 --rc genhtml_legend=1
00:04:58.662 --rc geninfo_all_blocks=1
00:04:58.662 --rc geninfo_unexecuted_blocks=1
00:04:58.662
00:04:58.662 '
00:04:58.662 14:06:59 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:58.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.662 --rc genhtml_branch_coverage=1
00:04:58.662 --rc genhtml_function_coverage=1
00:04:58.662 --rc genhtml_legend=1
00:04:58.662 --rc geninfo_all_blocks=1
00:04:58.662 --rc geninfo_unexecuted_blocks=1
00:04:58.662
00:04:58.662 '
00:04:58.662 14:06:59 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@7 -- # uname -s
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:04:58.662 14:06:59 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:04:58.662 14:06:59 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:58.662 14:06:59 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:58.662 14:06:59 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:58.662 14:06:59 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:58.662 14:06:59 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:58.662 14:06:59 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:58.662 14:06:59 json_config -- paths/export.sh@5 -- # export PATH
00:04:58.662 14:06:59 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@51 -- # : 0
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:04:58.662 14:06:59 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:04:58.662 14:06:59 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:04:58.662 14:06:59 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:04:58.662 14:06:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:04:58.662 14:06:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:04:58.662 14:06:59 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:04:58.663 14:06:59 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:04:58.663 14:06:59 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:04:58.663 14:06:59 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:04:58.663 14:06:59 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:04:58.663 14:06:59 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:04:58.663 14:06:59 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:04:58.663 14:06:59 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:04:58.663 14:06:59 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:04:58.663 14:06:59 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:04:58.663 14:06:59 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:04:58.663 14:06:59 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init'
INFO: JSON configuration test init
00:04:58.663 14:06:59 json_config -- json_config/json_config.sh@364 -- # json_config_test_init
00:04:58.663 14:06:59 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init
00:04:58.663 14:06:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:58.663 14:06:59 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:58.663 14:06:59 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target
00:04:58.663 14:06:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:58.663 14:06:59 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:58.663 14:06:59 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc
00:04:58.663 14:06:59 json_config -- json_config/common.sh@9 -- # local app=target
00:04:58.663 14:06:59 json_config -- json_config/common.sh@10 -- # shift
00:04:58.663 14:06:59 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:04:58.663 14:06:59 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:04:58.663 14:06:59 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:04:58.663 14:06:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:58.663 14:06:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:58.663 14:06:59 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1439595
00:04:58.663 14:06:59 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
Waiting for target to run...
00:04:58.663 14:06:59 json_config -- json_config/common.sh@25 -- # waitforlisten 1439595 /var/tmp/spdk_tgt.sock
00:04:58.663 14:06:59 json_config -- common/autotest_common.sh@835 -- # '[' -z 1439595 ']'
00:04:58.663 14:06:59 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:58.663 14:06:59 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:04:58.663 14:06:59 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:58.663 14:06:59 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:58.663 14:06:59 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:58.663 14:06:59 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:58.922 [2024-12-10 14:06:59.423263] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization...
00:04:58.922 [2024-12-10 14:06:59.423310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439595 ] 00:04:59.180 [2024-12-10 14:06:59.717725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.180 [2024-12-10 14:06:59.750246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.859 14:07:00 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.859 14:07:00 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:59.859 14:07:00 json_config -- json_config/common.sh@26 -- # echo '' 00:04:59.859 00:04:59.859 14:07:00 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:59.859 14:07:00 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:59.859 14:07:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:59.859 14:07:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.859 14:07:00 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:59.859 14:07:00 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:59.859 14:07:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:59.859 14:07:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.859 14:07:00 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:59.859 14:07:00 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:59.859 14:07:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:03.179 14:07:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.179 14:07:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:03.179 14:07:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:03.179 14:07:03 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@54 -- # sort 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:03.179 14:07:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:03.179 14:07:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:03.179 14:07:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.179 14:07:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:03.179 14:07:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:03.179 MallocForNvmf0 00:05:03.179 14:07:03 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:03.179 14:07:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:03.438 MallocForNvmf1 00:05:03.438 14:07:04 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:03.438 14:07:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:03.697 [2024-12-10 14:07:04.225213] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:03.697 14:07:04 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:03.697 14:07:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:03.955 14:07:04 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:03.955 14:07:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:03.955 14:07:04 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:03.955 14:07:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:04.214 14:07:04 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:04.214 14:07:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:04.473 [2024-12-10 14:07:04.999520] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:04.473 14:07:05 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:04.473 14:07:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:04.473 14:07:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.473 14:07:05 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:04.473 14:07:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:04.473 14:07:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.473 14:07:05 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:04.473 14:07:05 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:04.473 14:07:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:04.731 MallocBdevForConfigChangeCheck 00:05:04.731 14:07:05 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:04.731 14:07:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:04.731 14:07:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.731 14:07:05 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:04.731 14:07:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:04.989 14:07:05 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:04.989 INFO: shutting down applications... 
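The xtrace above is the complete NVMe-oF bring-up for this test: two RAM-backed bdevs, a TCP transport, one subsystem with both bdevs attached as namespaces, and a loopback listener. Condensed into a standalone sketch, assuming the working directory is the SPDK tree and the target is listening on /var/tmp/spdk_tgt.sock (both true in this run, per the trace):

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
  # RAM disks: 8 MB with 512-byte blocks, 4 MB with 1024-byte blocks
  rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  # TCP transport (-u/-c values copied from the trace), then the subsystem
  rpc nvmf_create_transport -t tcp -u 8192 -c 0
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  # produces the 'NVMe/TCP Target Listening on 127.0.0.1 port 4420' notice above
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420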
00:05:04.989 14:07:05 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:04.989 14:07:05 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:04.989 14:07:05 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:04.989 14:07:05 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:06.889 Calling clear_iscsi_subsystem 00:05:06.889 Calling clear_nvmf_subsystem 00:05:06.889 Calling clear_nbd_subsystem 00:05:06.889 Calling clear_ublk_subsystem 00:05:06.889 Calling clear_vhost_blk_subsystem 00:05:06.889 Calling clear_vhost_scsi_subsystem 00:05:06.889 Calling clear_bdev_subsystem 00:05:06.889 14:07:07 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:06.889 14:07:07 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:06.889 14:07:07 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:06.889 14:07:07 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:06.889 14:07:07 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:06.889 14:07:07 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:07.148 14:07:07 json_config -- json_config/json_config.sh@352 -- # break 00:05:07.148 14:07:07 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:07.148 14:07:07 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:07.148 14:07:07 json_config -- json_config/common.sh@31 -- # local app=target 00:05:07.148 14:07:07 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:07.148 14:07:07 json_config -- json_config/common.sh@35 -- # [[ -n 1439595 ]] 00:05:07.148 14:07:07 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1439595 00:05:07.148 14:07:07 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:07.148 14:07:07 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.148 14:07:07 json_config -- json_config/common.sh@41 -- # kill -0 1439595 00:05:07.148 14:07:07 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.716 14:07:08 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.716 14:07:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.716 14:07:08 json_config -- json_config/common.sh@41 -- # kill -0 1439595 00:05:07.716 14:07:08 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:07.716 14:07:08 json_config -- json_config/common.sh@43 -- # break 00:05:07.716 14:07:08 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:07.716 14:07:08 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:07.716 SPDK target shutdown done 00:05:07.716 14:07:08 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:07.716 INFO: relaunching applications... 
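Shutdown above is deliberately two-phase: common.sh sends SIGINT so spdk_tgt can unwind cleanly, then polls the PID rather than escalating to SIGKILL. The loop at common.sh@40-45 in the trace reduces to this shape (a sketch; the 30 x 0.5 s budget is taken from the trace, and $pid stands in for the app_pid entry):

  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2>/dev/null || break   # kill -0 only tests that the process still exists
      sleep 0.5
  done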
00:05:07.716 14:07:08 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.716 14:07:08 json_config -- json_config/common.sh@9 -- # local app=target 00:05:07.716 14:07:08 json_config -- json_config/common.sh@10 -- # shift 00:05:07.716 14:07:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:07.716 14:07:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:07.716 14:07:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:07.716 14:07:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.716 14:07:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.716 14:07:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1441289 00:05:07.716 14:07:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:07.716 Waiting for target to run... 00:05:07.716 14:07:08 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.716 14:07:08 json_config -- json_config/common.sh@25 -- # waitforlisten 1441289 /var/tmp/spdk_tgt.sock 00:05:07.716 14:07:08 json_config -- common/autotest_common.sh@835 -- # '[' -z 1441289 ']' 00:05:07.716 14:07:08 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:07.716 14:07:08 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.716 14:07:08 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:07.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:07.716 14:07:08 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.716 14:07:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.716 [2024-12-10 14:07:08.249546] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:05:07.716 [2024-12-10 14:07:08.249605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1441289 ] 00:05:07.975 [2024-12-10 14:07:08.540893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.975 [2024-12-10 14:07:08.573194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.261 [2024-12-10 14:07:11.607011] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:11.261 [2024-12-10 14:07:11.639308] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:11.261 14:07:11 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.261 14:07:11 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:11.261 14:07:11 json_config -- json_config/common.sh@26 -- # echo '' 00:05:11.261 00:05:11.261 14:07:11 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:11.261 14:07:11 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:11.261 INFO: Checking if target configuration is the same... 
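The check that follows is json_diff.sh comparing the relaunched target's live configuration against the JSON file it booted from. Both sides are canonicalized before diffing so key ordering cannot produce a false mismatch; roughly (a sketch assuming config_filter.py filters stdin to stdout, which is how json_diff.sh drives it; xtrace does not print the redirections below):

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | ./test/json_config/config_filter.py -method sort > /tmp/live.json
  ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/file.json
  diff -u /tmp/live.json /tmp/file.json   # exit 0 is reported as 'JSON config files are the same'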
00:05:11.261 14:07:11 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.261 14:07:11 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:11.261 14:07:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.261 + '[' 2 -ne 2 ']' 00:05:11.261 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:11.261 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:11.261 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:11.261 +++ basename /dev/fd/62 00:05:11.261 ++ mktemp /tmp/62.XXX 00:05:11.261 + tmp_file_1=/tmp/62.vLO 00:05:11.261 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.261 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:11.261 + tmp_file_2=/tmp/spdk_tgt_config.json.phx 00:05:11.261 + ret=0 00:05:11.261 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:11.519 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:11.519 + diff -u /tmp/62.vLO /tmp/spdk_tgt_config.json.phx 00:05:11.519 + echo 'INFO: JSON config files are the same' 00:05:11.519 INFO: JSON config files are the same 00:05:11.519 + rm /tmp/62.vLO /tmp/spdk_tgt_config.json.phx 00:05:11.519 + exit 0 00:05:11.519 14:07:12 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:11.519 14:07:12 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:11.519 INFO: changing configuration and checking if this can be detected... 00:05:11.519 14:07:12 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:11.519 14:07:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:11.778 14:07:12 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.778 14:07:12 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:11.778 14:07:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.778 + '[' 2 -ne 2 ']' 00:05:11.778 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:11.778 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:11.778 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:11.778 +++ basename /dev/fd/62 00:05:11.778 ++ mktemp /tmp/62.XXX 00:05:11.778 + tmp_file_1=/tmp/62.M07 00:05:11.778 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.778 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:11.778 + tmp_file_2=/tmp/spdk_tgt_config.json.41B 00:05:11.778 + ret=0 00:05:11.778 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.051 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.051 + diff -u /tmp/62.M07 /tmp/spdk_tgt_config.json.41B 00:05:12.051 + ret=1 00:05:12.051 + echo '=== Start of file: /tmp/62.M07 ===' 00:05:12.051 + cat /tmp/62.M07 00:05:12.051 + echo '=== End of file: /tmp/62.M07 ===' 00:05:12.051 + echo '' 00:05:12.051 + echo '=== Start of file: /tmp/spdk_tgt_config.json.41B ===' 00:05:12.051 + cat /tmp/spdk_tgt_config.json.41B 00:05:12.051 + echo '=== End of file: /tmp/spdk_tgt_config.json.41B ===' 00:05:12.051 + echo '' 00:05:12.051 + rm /tmp/62.M07 /tmp/spdk_tgt_config.json.41B 00:05:12.051 + exit 1 00:05:12.051 14:07:12 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:12.051 INFO: configuration change detected. 00:05:12.051 14:07:12 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:12.051 14:07:12 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:12.051 14:07:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.051 14:07:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.051 14:07:12 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:12.051 14:07:12 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:12.051 14:07:12 json_config -- json_config/json_config.sh@324 -- # [[ -n 1441289 ]] 00:05:12.051 14:07:12 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:12.051 14:07:12 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:12.051 14:07:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.051 14:07:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.051 14:07:12 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:12.051 14:07:12 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:12.051 14:07:12 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:12.051 14:07:12 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:12.051 14:07:12 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:12.051 14:07:12 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:12.051 14:07:12 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.051 14:07:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.051 14:07:12 json_config -- json_config/json_config.sh@330 -- # killprocess 1441289 00:05:12.051 14:07:12 json_config -- common/autotest_common.sh@954 -- # '[' -z 1441289 ']' 00:05:12.051 14:07:12 json_config -- common/autotest_common.sh@958 -- # kill -0 1441289 00:05:12.051 14:07:12 json_config -- common/autotest_common.sh@959 -- # uname 00:05:12.051 14:07:12 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.051 14:07:12 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1441289 00:05:12.051 14:07:12 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.051 14:07:12 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.051 14:07:12 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1441289' 00:05:12.051 killing process with pid 1441289 00:05:12.051 14:07:12 json_config -- common/autotest_common.sh@973 -- # kill 1441289 00:05:12.051 14:07:12 json_config -- common/autotest_common.sh@978 -- # wait 1441289 00:05:13.954 14:07:14 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.954 14:07:14 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:13.954 14:07:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:13.954 14:07:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.954 14:07:14 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:13.955 14:07:14 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:13.955 INFO: Success 00:05:13.955 00:05:13.955 real 0m15.138s 00:05:13.955 user 0m15.745s 00:05:13.955 sys 0m2.405s 00:05:13.955 14:07:14 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.955 14:07:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.955 ************************************ 00:05:13.955 END TEST json_config 00:05:13.955 ************************************ 00:05:13.955 14:07:14 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:13.955 14:07:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.955 14:07:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.955 14:07:14 -- common/autotest_common.sh@10 -- # set +x 00:05:13.955 ************************************ 00:05:13.955 START TEST json_config_extra_key 00:05:13.955 ************************************ 00:05:13.955 14:07:14 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:13.955 14:07:14 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:13.955 14:07:14 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:13.955 14:07:14 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:13.955 14:07:14 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.955 14:07:14 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:13.955 14:07:14 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.955 14:07:14 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:13.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.955 --rc genhtml_branch_coverage=1 00:05:13.955 --rc genhtml_function_coverage=1 00:05:13.955 --rc genhtml_legend=1 00:05:13.955 --rc geninfo_all_blocks=1 00:05:13.955 --rc geninfo_unexecuted_blocks=1 00:05:13.955 00:05:13.955 ' 00:05:13.955 14:07:14 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:13.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.955 --rc genhtml_branch_coverage=1 00:05:13.955 --rc genhtml_function_coverage=1 00:05:13.955 --rc genhtml_legend=1 00:05:13.955 --rc geninfo_all_blocks=1 00:05:13.955 --rc geninfo_unexecuted_blocks=1 00:05:13.955 00:05:13.955 ' 00:05:13.955 14:07:14 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:13.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.955 --rc genhtml_branch_coverage=1 00:05:13.955 --rc genhtml_function_coverage=1 00:05:13.955 --rc genhtml_legend=1 00:05:13.955 --rc geninfo_all_blocks=1 00:05:13.955 --rc geninfo_unexecuted_blocks=1 00:05:13.955 00:05:13.955 ' 00:05:13.955 14:07:14 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:13.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.955 --rc genhtml_branch_coverage=1 00:05:13.955 --rc genhtml_function_coverage=1 00:05:13.955 --rc genhtml_legend=1 00:05:13.955 --rc geninfo_all_blocks=1 00:05:13.955 --rc geninfo_unexecuted_blocks=1 00:05:13.955 00:05:13.955 ' 00:05:13.955 14:07:14 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.955 14:07:14 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.955 14:07:14 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.955 14:07:14 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.955 14:07:14 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.955 14:07:14 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:13.955 14:07:14 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:13.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:13.955 14:07:14 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:13.955 14:07:14 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:13.955 14:07:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:13.955 14:07:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:13.955 14:07:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:13.955 14:07:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:13.955 14:07:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:13.955 14:07:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:13.956 14:07:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:13.956 14:07:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:13.956 14:07:14 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:13.956 14:07:14 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:13.956 INFO: launching applications... 
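The launch that follows hands spdk_tgt a canned configuration file at startup instead of pushing RPCs afterwards. The two launch modes exercised in this log differ only in how configuration arrives; both command lines below are verbatim from the traces, with paths shortened to be relative to the SPDK tree:

  # json_config: start idle under --wait-for-rpc, then configure via load_config
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
  # json_config_extra_key: replay a saved JSON config during boot
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json test/json_config/extra_key.json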
00:05:13.956 14:07:14 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:13.956 14:07:14 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:13.956 14:07:14 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:13.956 14:07:14 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:13.956 14:07:14 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:13.956 14:07:14 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:13.956 14:07:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.956 14:07:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.956 14:07:14 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1442354 00:05:13.956 14:07:14 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:13.956 Waiting for target to run... 00:05:13.956 14:07:14 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1442354 /var/tmp/spdk_tgt.sock 00:05:13.956 14:07:14 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1442354 ']' 00:05:13.956 14:07:14 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:13.956 14:07:14 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:13.956 14:07:14 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.956 14:07:14 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:13.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:13.956 14:07:14 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.956 14:07:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:13.956 [2024-12-10 14:07:14.618609] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:05:13.956 [2024-12-10 14:07:14.618658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442354 ] 00:05:14.523 [2024-12-10 14:07:15.085447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.523 [2024-12-10 14:07:15.141566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.781 14:07:15 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.781 14:07:15 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:14.781 14:07:15 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:14.781 00:05:14.781 14:07:15 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:14.781 INFO: shutting down applications... 
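waitforlisten, which returned above on its first check (i == 0), is essentially a bounded poll of the RPC socket. A minimal sketch of the idea only: the real helper in autotest_common.sh also verifies the PID stays alive, and rpc_get_methods is assumed here as a cheap liveness probe rather than confirmed as what the helper actually calls:

  for ((i = 0; i < 100; i++)); do   # max_retries=100, as in the trace
      ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done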
00:05:14.781 14:07:15 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:14.781 14:07:15 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:14.781 14:07:15 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:14.781 14:07:15 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1442354 ]] 00:05:14.781 14:07:15 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1442354 00:05:14.781 14:07:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:14.781 14:07:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.781 14:07:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1442354 00:05:14.781 14:07:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.349 14:07:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.349 14:07:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.349 14:07:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1442354 00:05:15.349 14:07:15 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:15.349 14:07:15 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:15.349 14:07:15 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:15.349 14:07:15 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:15.349 SPDK target shutdown done 00:05:15.349 14:07:15 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:15.349 Success 00:05:15.349 00:05:15.349 real 0m1.575s 00:05:15.349 user 0m1.169s 00:05:15.349 sys 0m0.582s 00:05:15.349 14:07:15 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.349 14:07:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:15.349 ************************************ 00:05:15.349 END TEST json_config_extra_key 00:05:15.349 ************************************ 00:05:15.349 14:07:15 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:15.349 14:07:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.349 14:07:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.349 14:07:15 -- common/autotest_common.sh@10 -- # set +x 00:05:15.349 ************************************ 00:05:15.349 START TEST alias_rpc 00:05:15.349 ************************************ 00:05:15.349 14:07:16 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:15.608 * Looking for test storage... 
00:05:15.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:15.608 14:07:16 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:15.608 14:07:16 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:15.608 14:07:16 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:15.608 14:07:16 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:15.608 14:07:16 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.609 14:07:16 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.609 14:07:16 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.609 14:07:16 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:15.609 14:07:16 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.609 14:07:16 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:15.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.609 --rc genhtml_branch_coverage=1 00:05:15.609 --rc genhtml_function_coverage=1 00:05:15.609 --rc genhtml_legend=1 00:05:15.609 --rc geninfo_all_blocks=1 00:05:15.609 --rc geninfo_unexecuted_blocks=1 00:05:15.609 00:05:15.609 ' 00:05:15.609 14:07:16 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:15.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.609 --rc genhtml_branch_coverage=1 00:05:15.609 --rc genhtml_function_coverage=1 00:05:15.609 --rc genhtml_legend=1 00:05:15.609 --rc geninfo_all_blocks=1 00:05:15.609 --rc geninfo_unexecuted_blocks=1 00:05:15.609 00:05:15.609 ' 00:05:15.609 14:07:16 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:15.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.609 --rc genhtml_branch_coverage=1 00:05:15.609 --rc genhtml_function_coverage=1 00:05:15.609 --rc genhtml_legend=1 00:05:15.609 --rc geninfo_all_blocks=1 00:05:15.609 --rc geninfo_unexecuted_blocks=1 00:05:15.609 00:05:15.609 ' 00:05:15.609 14:07:16 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:15.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.609 --rc genhtml_branch_coverage=1 00:05:15.609 --rc genhtml_function_coverage=1 00:05:15.609 --rc genhtml_legend=1 00:05:15.609 --rc geninfo_all_blocks=1 00:05:15.609 --rc geninfo_unexecuted_blocks=1 00:05:15.609 00:05:15.609 ' 00:05:15.609 14:07:16 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:15.609 14:07:16 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1442705 00:05:15.609 14:07:16 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.609 14:07:16 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1442705 00:05:15.609 14:07:16 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1442705 ']' 00:05:15.609 14:07:16 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.609 14:07:16 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.609 14:07:16 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.609 14:07:16 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.609 14:07:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.609 [2024-12-10 14:07:16.252500] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:05:15.609 [2024-12-10 14:07:16.252552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442705 ] 00:05:15.609 [2024-12-10 14:07:16.332325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.868 [2024-12-10 14:07:16.373825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.868 14:07:16 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.868 14:07:16 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:15.868 14:07:16 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:16.126 14:07:16 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1442705 00:05:16.126 14:07:16 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1442705 ']' 00:05:16.126 14:07:16 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1442705 00:05:16.126 14:07:16 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:16.126 14:07:16 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.126 14:07:16 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1442705 00:05:16.126 14:07:16 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.126 14:07:16 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.126 14:07:16 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1442705' 00:05:16.126 killing process with pid 1442705 00:05:16.126 14:07:16 alias_rpc -- common/autotest_common.sh@973 -- # kill 1442705 00:05:16.126 14:07:16 alias_rpc -- common/autotest_common.sh@978 -- # wait 1442705 00:05:16.695 00:05:16.695 real 0m1.141s 00:05:16.695 user 0m1.170s 00:05:16.695 sys 0m0.397s 00:05:16.695 14:07:17 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.695 14:07:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.695 ************************************ 00:05:16.695 END TEST alias_rpc 00:05:16.695 ************************************ 00:05:16.695 14:07:17 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:16.695 14:07:17 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:16.695 14:07:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.695 14:07:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.695 14:07:17 -- common/autotest_common.sh@10 -- # set +x 00:05:16.695 ************************************ 00:05:16.695 START TEST spdkcli_tcp 00:05:16.695 ************************************ 00:05:16.695 14:07:17 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:16.695 * Looking for test storage... 
00:05:16.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:16.695 14:07:17 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:16.695 14:07:17 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:16.695 14:07:17 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:16.695 14:07:17 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.695 14:07:17 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:16.695 14:07:17 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.695 14:07:17 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:16.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.695 --rc genhtml_branch_coverage=1 00:05:16.695 --rc genhtml_function_coverage=1 00:05:16.695 --rc genhtml_legend=1 00:05:16.695 --rc geninfo_all_blocks=1 00:05:16.695 --rc geninfo_unexecuted_blocks=1 00:05:16.695 00:05:16.695 ' 00:05:16.695 14:07:17 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:16.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.695 --rc genhtml_branch_coverage=1 00:05:16.695 --rc genhtml_function_coverage=1 00:05:16.695 --rc genhtml_legend=1 00:05:16.695 --rc geninfo_all_blocks=1 00:05:16.695 --rc 
geninfo_unexecuted_blocks=1 00:05:16.695 00:05:16.695 ' 00:05:16.695 14:07:17 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:16.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.695 --rc genhtml_branch_coverage=1 00:05:16.695 --rc genhtml_function_coverage=1 00:05:16.695 --rc genhtml_legend=1 00:05:16.695 --rc geninfo_all_blocks=1 00:05:16.695 --rc geninfo_unexecuted_blocks=1 00:05:16.695 00:05:16.695 ' 00:05:16.695 14:07:17 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:16.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.695 --rc genhtml_branch_coverage=1 00:05:16.695 --rc genhtml_function_coverage=1 00:05:16.695 --rc genhtml_legend=1 00:05:16.695 --rc geninfo_all_blocks=1 00:05:16.695 --rc geninfo_unexecuted_blocks=1 00:05:16.695 00:05:16.695 ' 00:05:16.695 14:07:17 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:16.695 14:07:17 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:16.695 14:07:17 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:16.695 14:07:17 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:16.695 14:07:17 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:16.695 14:07:17 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:16.695 14:07:17 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:16.695 14:07:17 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.695 14:07:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.695 14:07:17 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1442934 00:05:16.695 14:07:17 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:16.695 14:07:17 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1442934 00:05:16.695 14:07:17 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1442934 ']' 00:05:16.695 14:07:17 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.695 14:07:17 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.695 14:07:17 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.695 14:07:17 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.695 14:07:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.954 [2024-12-10 14:07:17.462286] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:05:16.954 [2024-12-10 14:07:17.462334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442934 ] 00:05:16.954 [2024-12-10 14:07:17.542493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.954 [2024-12-10 14:07:17.582550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.954 [2024-12-10 14:07:17.582551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.212 14:07:17 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.212 14:07:17 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:17.212 14:07:17 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1443148 00:05:17.212 14:07:17 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:17.212 14:07:17 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:17.472 [ 00:05:17.472 "bdev_malloc_delete", 00:05:17.472 "bdev_malloc_create", 00:05:17.472 "bdev_null_resize", 00:05:17.472 "bdev_null_delete", 00:05:17.472 "bdev_null_create", 00:05:17.472 "bdev_nvme_cuse_unregister", 00:05:17.472 "bdev_nvme_cuse_register", 00:05:17.472 "bdev_opal_new_user", 00:05:17.472 "bdev_opal_set_lock_state", 00:05:17.472 "bdev_opal_delete", 00:05:17.472 "bdev_opal_get_info", 00:05:17.472 "bdev_opal_create", 00:05:17.472 "bdev_nvme_opal_revert", 00:05:17.472 "bdev_nvme_opal_init", 00:05:17.472 "bdev_nvme_send_cmd", 00:05:17.472 "bdev_nvme_set_keys", 00:05:17.472 "bdev_nvme_get_path_iostat", 00:05:17.472 "bdev_nvme_get_mdns_discovery_info", 00:05:17.472 "bdev_nvme_stop_mdns_discovery", 00:05:17.472 "bdev_nvme_start_mdns_discovery", 00:05:17.472 "bdev_nvme_set_multipath_policy", 00:05:17.472 "bdev_nvme_set_preferred_path", 00:05:17.472 "bdev_nvme_get_io_paths", 00:05:17.472 "bdev_nvme_remove_error_injection", 00:05:17.472 "bdev_nvme_add_error_injection", 00:05:17.472 "bdev_nvme_get_discovery_info", 00:05:17.472 "bdev_nvme_stop_discovery", 00:05:17.472 "bdev_nvme_start_discovery", 00:05:17.472 "bdev_nvme_get_controller_health_info", 00:05:17.472 "bdev_nvme_disable_controller", 00:05:17.472 "bdev_nvme_enable_controller", 00:05:17.472 "bdev_nvme_reset_controller", 00:05:17.472 "bdev_nvme_get_transport_statistics", 00:05:17.472 "bdev_nvme_apply_firmware", 00:05:17.472 "bdev_nvme_detach_controller", 00:05:17.472 "bdev_nvme_get_controllers", 00:05:17.472 "bdev_nvme_attach_controller", 00:05:17.472 "bdev_nvme_set_hotplug", 00:05:17.472 "bdev_nvme_set_options", 00:05:17.472 "bdev_passthru_delete", 00:05:17.472 "bdev_passthru_create", 00:05:17.472 "bdev_lvol_set_parent_bdev", 00:05:17.472 "bdev_lvol_set_parent", 00:05:17.472 "bdev_lvol_check_shallow_copy", 00:05:17.472 "bdev_lvol_start_shallow_copy", 00:05:17.472 "bdev_lvol_grow_lvstore", 00:05:17.472 "bdev_lvol_get_lvols", 00:05:17.472 "bdev_lvol_get_lvstores", 00:05:17.472 "bdev_lvol_delete", 00:05:17.472 "bdev_lvol_set_read_only", 00:05:17.472 "bdev_lvol_resize", 00:05:17.472 "bdev_lvol_decouple_parent", 00:05:17.472 "bdev_lvol_inflate", 00:05:17.472 "bdev_lvol_rename", 00:05:17.472 "bdev_lvol_clone_bdev", 00:05:17.472 "bdev_lvol_clone", 00:05:17.472 "bdev_lvol_snapshot", 00:05:17.472 "bdev_lvol_create", 00:05:17.472 "bdev_lvol_delete_lvstore", 00:05:17.472 "bdev_lvol_rename_lvstore", 
00:05:17.472 "bdev_lvol_create_lvstore", 00:05:17.472 "bdev_raid_set_options", 00:05:17.472 "bdev_raid_remove_base_bdev", 00:05:17.472 "bdev_raid_add_base_bdev", 00:05:17.472 "bdev_raid_delete", 00:05:17.472 "bdev_raid_create", 00:05:17.472 "bdev_raid_get_bdevs", 00:05:17.472 "bdev_error_inject_error", 00:05:17.472 "bdev_error_delete", 00:05:17.472 "bdev_error_create", 00:05:17.472 "bdev_split_delete", 00:05:17.472 "bdev_split_create", 00:05:17.472 "bdev_delay_delete", 00:05:17.472 "bdev_delay_create", 00:05:17.472 "bdev_delay_update_latency", 00:05:17.472 "bdev_zone_block_delete", 00:05:17.472 "bdev_zone_block_create", 00:05:17.472 "blobfs_create", 00:05:17.472 "blobfs_detect", 00:05:17.472 "blobfs_set_cache_size", 00:05:17.472 "bdev_aio_delete", 00:05:17.472 "bdev_aio_rescan", 00:05:17.472 "bdev_aio_create", 00:05:17.472 "bdev_ftl_set_property", 00:05:17.472 "bdev_ftl_get_properties", 00:05:17.472 "bdev_ftl_get_stats", 00:05:17.472 "bdev_ftl_unmap", 00:05:17.472 "bdev_ftl_unload", 00:05:17.472 "bdev_ftl_delete", 00:05:17.472 "bdev_ftl_load", 00:05:17.472 "bdev_ftl_create", 00:05:17.472 "bdev_virtio_attach_controller", 00:05:17.472 "bdev_virtio_scsi_get_devices", 00:05:17.472 "bdev_virtio_detach_controller", 00:05:17.472 "bdev_virtio_blk_set_hotplug", 00:05:17.472 "bdev_iscsi_delete", 00:05:17.472 "bdev_iscsi_create", 00:05:17.472 "bdev_iscsi_set_options", 00:05:17.472 "accel_error_inject_error", 00:05:17.472 "ioat_scan_accel_module", 00:05:17.472 "dsa_scan_accel_module", 00:05:17.472 "iaa_scan_accel_module", 00:05:17.472 "vfu_virtio_create_fs_endpoint", 00:05:17.472 "vfu_virtio_create_scsi_endpoint", 00:05:17.472 "vfu_virtio_scsi_remove_target", 00:05:17.472 "vfu_virtio_scsi_add_target", 00:05:17.472 "vfu_virtio_create_blk_endpoint", 00:05:17.472 "vfu_virtio_delete_endpoint", 00:05:17.472 "keyring_file_remove_key", 00:05:17.472 "keyring_file_add_key", 00:05:17.472 "keyring_linux_set_options", 00:05:17.472 "fsdev_aio_delete", 00:05:17.472 "fsdev_aio_create", 00:05:17.472 "iscsi_get_histogram", 00:05:17.472 "iscsi_enable_histogram", 00:05:17.472 "iscsi_set_options", 00:05:17.472 "iscsi_get_auth_groups", 00:05:17.472 "iscsi_auth_group_remove_secret", 00:05:17.472 "iscsi_auth_group_add_secret", 00:05:17.472 "iscsi_delete_auth_group", 00:05:17.472 "iscsi_create_auth_group", 00:05:17.472 "iscsi_set_discovery_auth", 00:05:17.472 "iscsi_get_options", 00:05:17.472 "iscsi_target_node_request_logout", 00:05:17.472 "iscsi_target_node_set_redirect", 00:05:17.472 "iscsi_target_node_set_auth", 00:05:17.472 "iscsi_target_node_add_lun", 00:05:17.472 "iscsi_get_stats", 00:05:17.472 "iscsi_get_connections", 00:05:17.472 "iscsi_portal_group_set_auth", 00:05:17.472 "iscsi_start_portal_group", 00:05:17.472 "iscsi_delete_portal_group", 00:05:17.472 "iscsi_create_portal_group", 00:05:17.472 "iscsi_get_portal_groups", 00:05:17.472 "iscsi_delete_target_node", 00:05:17.472 "iscsi_target_node_remove_pg_ig_maps", 00:05:17.472 "iscsi_target_node_add_pg_ig_maps", 00:05:17.472 "iscsi_create_target_node", 00:05:17.472 "iscsi_get_target_nodes", 00:05:17.472 "iscsi_delete_initiator_group", 00:05:17.472 "iscsi_initiator_group_remove_initiators", 00:05:17.472 "iscsi_initiator_group_add_initiators", 00:05:17.472 "iscsi_create_initiator_group", 00:05:17.472 "iscsi_get_initiator_groups", 00:05:17.472 "nvmf_set_crdt", 00:05:17.472 "nvmf_set_config", 00:05:17.472 "nvmf_set_max_subsystems", 00:05:17.472 "nvmf_stop_mdns_prr", 00:05:17.472 "nvmf_publish_mdns_prr", 00:05:17.472 "nvmf_subsystem_get_listeners", 00:05:17.472 
"nvmf_subsystem_get_qpairs", 00:05:17.472 "nvmf_subsystem_get_controllers", 00:05:17.472 "nvmf_get_stats", 00:05:17.472 "nvmf_get_transports", 00:05:17.472 "nvmf_create_transport", 00:05:17.472 "nvmf_get_targets", 00:05:17.472 "nvmf_delete_target", 00:05:17.472 "nvmf_create_target", 00:05:17.472 "nvmf_subsystem_allow_any_host", 00:05:17.472 "nvmf_subsystem_set_keys", 00:05:17.472 "nvmf_subsystem_remove_host", 00:05:17.472 "nvmf_subsystem_add_host", 00:05:17.472 "nvmf_ns_remove_host", 00:05:17.472 "nvmf_ns_add_host", 00:05:17.472 "nvmf_subsystem_remove_ns", 00:05:17.472 "nvmf_subsystem_set_ns_ana_group", 00:05:17.472 "nvmf_subsystem_add_ns", 00:05:17.472 "nvmf_subsystem_listener_set_ana_state", 00:05:17.472 "nvmf_discovery_get_referrals", 00:05:17.472 "nvmf_discovery_remove_referral", 00:05:17.472 "nvmf_discovery_add_referral", 00:05:17.472 "nvmf_subsystem_remove_listener", 00:05:17.472 "nvmf_subsystem_add_listener", 00:05:17.472 "nvmf_delete_subsystem", 00:05:17.472 "nvmf_create_subsystem", 00:05:17.472 "nvmf_get_subsystems", 00:05:17.472 "env_dpdk_get_mem_stats", 00:05:17.472 "nbd_get_disks", 00:05:17.472 "nbd_stop_disk", 00:05:17.472 "nbd_start_disk", 00:05:17.472 "ublk_recover_disk", 00:05:17.472 "ublk_get_disks", 00:05:17.472 "ublk_stop_disk", 00:05:17.472 "ublk_start_disk", 00:05:17.472 "ublk_destroy_target", 00:05:17.472 "ublk_create_target", 00:05:17.472 "virtio_blk_create_transport", 00:05:17.472 "virtio_blk_get_transports", 00:05:17.472 "vhost_controller_set_coalescing", 00:05:17.472 "vhost_get_controllers", 00:05:17.472 "vhost_delete_controller", 00:05:17.472 "vhost_create_blk_controller", 00:05:17.472 "vhost_scsi_controller_remove_target", 00:05:17.472 "vhost_scsi_controller_add_target", 00:05:17.472 "vhost_start_scsi_controller", 00:05:17.472 "vhost_create_scsi_controller", 00:05:17.472 "thread_set_cpumask", 00:05:17.472 "scheduler_set_options", 00:05:17.472 "framework_get_governor", 00:05:17.472 "framework_get_scheduler", 00:05:17.472 "framework_set_scheduler", 00:05:17.472 "framework_get_reactors", 00:05:17.472 "thread_get_io_channels", 00:05:17.472 "thread_get_pollers", 00:05:17.472 "thread_get_stats", 00:05:17.472 "framework_monitor_context_switch", 00:05:17.472 "spdk_kill_instance", 00:05:17.472 "log_enable_timestamps", 00:05:17.472 "log_get_flags", 00:05:17.472 "log_clear_flag", 00:05:17.472 "log_set_flag", 00:05:17.472 "log_get_level", 00:05:17.472 "log_set_level", 00:05:17.472 "log_get_print_level", 00:05:17.472 "log_set_print_level", 00:05:17.472 "framework_enable_cpumask_locks", 00:05:17.472 "framework_disable_cpumask_locks", 00:05:17.472 "framework_wait_init", 00:05:17.472 "framework_start_init", 00:05:17.472 "scsi_get_devices", 00:05:17.472 "bdev_get_histogram", 00:05:17.472 "bdev_enable_histogram", 00:05:17.472 "bdev_set_qos_limit", 00:05:17.472 "bdev_set_qd_sampling_period", 00:05:17.472 "bdev_get_bdevs", 00:05:17.472 "bdev_reset_iostat", 00:05:17.472 "bdev_get_iostat", 00:05:17.472 "bdev_examine", 00:05:17.472 "bdev_wait_for_examine", 00:05:17.472 "bdev_set_options", 00:05:17.473 "accel_get_stats", 00:05:17.473 "accel_set_options", 00:05:17.473 "accel_set_driver", 00:05:17.473 "accel_crypto_key_destroy", 00:05:17.473 "accel_crypto_keys_get", 00:05:17.473 "accel_crypto_key_create", 00:05:17.473 "accel_assign_opc", 00:05:17.473 "accel_get_module_info", 00:05:17.473 "accel_get_opc_assignments", 00:05:17.473 "vmd_rescan", 00:05:17.473 "vmd_remove_device", 00:05:17.473 "vmd_enable", 00:05:17.473 "sock_get_default_impl", 00:05:17.473 "sock_set_default_impl", 
00:05:17.473 "sock_impl_set_options", 00:05:17.473 "sock_impl_get_options", 00:05:17.473 "iobuf_get_stats", 00:05:17.473 "iobuf_set_options", 00:05:17.473 "keyring_get_keys", 00:05:17.473 "vfu_tgt_set_base_path", 00:05:17.473 "framework_get_pci_devices", 00:05:17.473 "framework_get_config", 00:05:17.473 "framework_get_subsystems", 00:05:17.473 "fsdev_set_opts", 00:05:17.473 "fsdev_get_opts", 00:05:17.473 "trace_get_info", 00:05:17.473 "trace_get_tpoint_group_mask", 00:05:17.473 "trace_disable_tpoint_group", 00:05:17.473 "trace_enable_tpoint_group", 00:05:17.473 "trace_clear_tpoint_mask", 00:05:17.473 "trace_set_tpoint_mask", 00:05:17.473 "notify_get_notifications", 00:05:17.473 "notify_get_types", 00:05:17.473 "spdk_get_version", 00:05:17.473 "rpc_get_methods" 00:05:17.473 ] 00:05:17.473 14:07:18 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:17.473 14:07:18 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:17.473 14:07:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.473 14:07:18 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:17.473 14:07:18 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1442934 00:05:17.473 14:07:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1442934 ']' 00:05:17.473 14:07:18 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1442934 00:05:17.473 14:07:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:17.473 14:07:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.473 14:07:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1442934 00:05:17.473 14:07:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.473 14:07:18 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.473 14:07:18 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1442934' 00:05:17.473 killing process with pid 1442934 00:05:17.473 14:07:18 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1442934 00:05:17.473 14:07:18 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1442934 00:05:17.732 00:05:17.732 real 0m1.159s 00:05:17.732 user 0m1.954s 00:05:17.732 sys 0m0.449s 00:05:17.732 14:07:18 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.732 14:07:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.732 ************************************ 00:05:17.732 END TEST spdkcli_tcp 00:05:17.732 ************************************ 00:05:17.732 14:07:18 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:17.732 14:07:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.732 14:07:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.732 14:07:18 -- common/autotest_common.sh@10 -- # set +x 00:05:17.732 ************************************ 00:05:17.732 START TEST dpdk_mem_utility 00:05:17.732 ************************************ 00:05:17.732 14:07:18 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:17.991 * Looking for test storage... 
00:05:17.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:17.991 14:07:18 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:17.991 14:07:18 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:17.991 14:07:18 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:17.991 14:07:18 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:17.991 14:07:18 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.991 14:07:18 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.991 14:07:18 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.991 14:07:18 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.991 14:07:18 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.991 14:07:18 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.991 14:07:18 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.992 14:07:18 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.992 14:07:18 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.992 14:07:18 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.992 14:07:18 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.992 14:07:18 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:17.992 14:07:18 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:17.992 14:07:18 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.992 14:07:18 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.992 14:07:18 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:17.992 14:07:18 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:17.992 14:07:18 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.992 14:07:18 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:17.992 14:07:18 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.992 14:07:18 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:17.992 14:07:18 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:17.992 14:07:18 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.992 14:07:18 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:17.992 14:07:18 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.992 14:07:18 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.992 14:07:18 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.992 14:07:18 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:17.992 14:07:18 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.992 14:07:18 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:17.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.992 --rc genhtml_branch_coverage=1 00:05:17.992 --rc genhtml_function_coverage=1 00:05:17.992 --rc genhtml_legend=1 00:05:17.992 --rc geninfo_all_blocks=1 00:05:17.992 --rc geninfo_unexecuted_blocks=1 00:05:17.992 00:05:17.992 ' 00:05:17.992 14:07:18 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:17.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.992 --rc 
genhtml_branch_coverage=1 00:05:17.992 --rc genhtml_function_coverage=1 00:05:17.992 --rc genhtml_legend=1 00:05:17.992 --rc geninfo_all_blocks=1 00:05:17.992 --rc geninfo_unexecuted_blocks=1 00:05:17.992 00:05:17.992 ' 00:05:17.992 14:07:18 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:17.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.992 --rc genhtml_branch_coverage=1 00:05:17.992 --rc genhtml_function_coverage=1 00:05:17.992 --rc genhtml_legend=1 00:05:17.992 --rc geninfo_all_blocks=1 00:05:17.992 --rc geninfo_unexecuted_blocks=1 00:05:17.992 00:05:17.992 ' 00:05:17.992 14:07:18 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:17.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.992 --rc genhtml_branch_coverage=1 00:05:17.992 --rc genhtml_function_coverage=1 00:05:17.992 --rc genhtml_legend=1 00:05:17.992 --rc geninfo_all_blocks=1 00:05:17.992 --rc geninfo_unexecuted_blocks=1 00:05:17.992 00:05:17.992 ' 00:05:17.992 14:07:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:17.992 14:07:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1443235 00:05:17.992 14:07:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.992 14:07:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1443235 00:05:17.992 14:07:18 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1443235 ']' 00:05:17.992 14:07:18 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.992 14:07:18 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.992 14:07:18 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.992 14:07:18 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.992 14:07:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:17.992 [2024-12-10 14:07:18.680616] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
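The dpdk_mem_utility test about to run pairs one RPC with one helper script: env_dpdk_get_mem_stats makes the target dump its DPDK allocator state to a file, and scripts/dpdk_mem_info.py renders that dump. A minimal sketch of the same two steps against an already-running target, with paths taken from this log:

    # dump allocator state; the RPC replies with {"filename": "/tmp/spdk_mem_dump.txt"}
    ./scripts/rpc.py env_dpdk_get_mem_stats
    # summarize heaps, mempools and memzones from the dump
    ./scripts/dpdk_mem_info.py
    # per-element detail for heap id 0, matching the '-m 0' invocation below
    ./scripts/dpdk_mem_info.py -m 0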
00:05:17.992 [2024-12-10 14:07:18.680665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1443235 ] 00:05:18.251 [2024-12-10 14:07:18.761558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.251 [2024-12-10 14:07:18.801631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.818 14:07:19 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.818 14:07:19 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:18.818 14:07:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:18.818 14:07:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:18.818 14:07:19 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.818 14:07:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:18.818 { 00:05:18.818 "filename": "/tmp/spdk_mem_dump.txt" 00:05:18.818 } 00:05:18.818 14:07:19 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.818 14:07:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:19.077 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:19.077 1 heaps totaling size 818.000000 MiB 00:05:19.077 size: 818.000000 MiB heap id: 0 00:05:19.077 end heaps---------- 00:05:19.077 9 mempools totaling size 603.782043 MiB 00:05:19.077 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:19.077 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:19.077 size: 100.555481 MiB name: bdev_io_1443235 00:05:19.077 size: 50.003479 MiB name: msgpool_1443235 00:05:19.077 size: 36.509338 MiB name: fsdev_io_1443235 00:05:19.077 size: 21.763794 MiB name: PDU_Pool 00:05:19.077 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:19.077 size: 4.133484 MiB name: evtpool_1443235 00:05:19.077 size: 0.026123 MiB name: Session_Pool 00:05:19.077 end mempools------- 00:05:19.077 6 memzones totaling size 4.142822 MiB 00:05:19.077 size: 1.000366 MiB name: RG_ring_0_1443235 00:05:19.077 size: 1.000366 MiB name: RG_ring_1_1443235 00:05:19.077 size: 1.000366 MiB name: RG_ring_4_1443235 00:05:19.077 size: 1.000366 MiB name: RG_ring_5_1443235 00:05:19.077 size: 0.125366 MiB name: RG_ring_2_1443235 00:05:19.077 size: 0.015991 MiB name: RG_ring_3_1443235 00:05:19.077 end memzones------- 00:05:19.077 14:07:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:19.077 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:19.077 list of free elements. 
size: 10.852478 MiB 00:05:19.077 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:19.077 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:19.077 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:19.077 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:19.077 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:19.077 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:19.077 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:19.077 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:19.077 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:19.077 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:19.077 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:19.077 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:19.077 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:19.077 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:19.078 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:19.078 list of standard malloc elements. size: 199.218628 MiB 00:05:19.078 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:19.078 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:19.078 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:19.078 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:19.078 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:19.078 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:19.078 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:19.078 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:19.078 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:19.078 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:19.078 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:19.078 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:19.078 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:19.078 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:19.078 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:19.078 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:19.078 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:19.078 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:19.078 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:19.078 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:19.078 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:19.078 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:19.078 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:19.078 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:19.078 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:19.078 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:19.078 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:19.078 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:19.078 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:19.078 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:19.078 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:19.078 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:19.078 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:19.078 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:19.078 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:19.078 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:19.078 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:19.078 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:19.078 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:19.078 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:19.078 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:19.078 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:19.078 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:19.078 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:19.078 list of memzone associated elements. size: 607.928894 MiB 00:05:19.078 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:19.078 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:19.078 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:19.078 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:19.078 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:19.078 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1443235_0 00:05:19.078 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:19.078 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1443235_0 00:05:19.078 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:19.078 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1443235_0 00:05:19.078 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:19.078 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:19.078 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:19.078 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:19.078 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:19.078 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1443235_0 00:05:19.078 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:19.078 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1443235 00:05:19.078 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:19.078 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1443235 00:05:19.078 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:19.078 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:19.078 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:19.078 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:19.078 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:19.078 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:19.078 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:19.078 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:19.078 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:19.078 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1443235 00:05:19.078 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:19.078 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1443235 00:05:19.078 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:19.078 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1443235 00:05:19.078 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:05:19.078 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1443235 00:05:19.078 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:19.078 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1443235 00:05:19.078 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:19.078 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1443235 00:05:19.078 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:19.078 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:19.078 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:19.078 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:19.078 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:19.078 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:19.078 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:19.078 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1443235 00:05:19.078 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:19.078 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1443235 00:05:19.078 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:19.078 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:19.078 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:19.078 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:19.078 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:19.078 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1443235 00:05:19.078 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:19.078 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:19.078 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:19.078 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1443235 00:05:19.078 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:19.078 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1443235 00:05:19.078 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:19.078 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1443235 00:05:19.078 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:19.078 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:19.078 14:07:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:19.078 14:07:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1443235 00:05:19.078 14:07:19 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1443235 ']' 00:05:19.078 14:07:19 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1443235 00:05:19.078 14:07:19 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:19.078 14:07:19 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.078 14:07:19 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1443235 00:05:19.078 14:07:19 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.078 14:07:19 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.078 14:07:19 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1443235' 00:05:19.078 killing process with pid 1443235 00:05:19.078 14:07:19 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1443235 00:05:19.078 14:07:19 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1443235 00:05:19.337 00:05:19.337 real 0m1.494s 00:05:19.337 user 0m1.585s 00:05:19.337 sys 0m0.423s 00:05:19.337 14:07:19 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.337 14:07:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:19.337 ************************************ 00:05:19.337 END TEST dpdk_mem_utility 00:05:19.337 ************************************ 00:05:19.337 14:07:19 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:19.337 14:07:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.337 14:07:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.337 14:07:19 -- common/autotest_common.sh@10 -- # set +x 00:05:19.337 ************************************ 00:05:19.337 START TEST event 00:05:19.337 ************************************ 00:05:19.337 14:07:20 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:19.596 * Looking for test storage... 00:05:19.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:19.596 14:07:20 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:19.596 14:07:20 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:19.596 14:07:20 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:19.596 14:07:20 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:19.596 14:07:20 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.596 14:07:20 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.596 14:07:20 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.596 14:07:20 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.596 14:07:20 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.596 14:07:20 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.596 14:07:20 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.596 14:07:20 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.596 14:07:20 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.596 14:07:20 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.596 14:07:20 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.596 14:07:20 event -- scripts/common.sh@344 -- # case "$op" in 00:05:19.596 14:07:20 event -- scripts/common.sh@345 -- # : 1 00:05:19.596 14:07:20 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.596 14:07:20 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.596 14:07:20 event -- scripts/common.sh@365 -- # decimal 1 00:05:19.596 14:07:20 event -- scripts/common.sh@353 -- # local d=1 00:05:19.596 14:07:20 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.596 14:07:20 event -- scripts/common.sh@355 -- # echo 1 00:05:19.596 14:07:20 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.596 14:07:20 event -- scripts/common.sh@366 -- # decimal 2 00:05:19.596 14:07:20 event -- scripts/common.sh@353 -- # local d=2 00:05:19.596 14:07:20 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.596 14:07:20 event -- scripts/common.sh@355 -- # echo 2 00:05:19.596 14:07:20 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.596 14:07:20 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.596 14:07:20 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.596 14:07:20 event -- scripts/common.sh@368 -- # return 0 00:05:19.596 14:07:20 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.596 14:07:20 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:19.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.596 --rc genhtml_branch_coverage=1 00:05:19.596 --rc genhtml_function_coverage=1 00:05:19.596 --rc genhtml_legend=1 00:05:19.596 --rc geninfo_all_blocks=1 00:05:19.596 --rc geninfo_unexecuted_blocks=1 00:05:19.596 00:05:19.596 ' 00:05:19.596 14:07:20 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:19.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.596 --rc genhtml_branch_coverage=1 00:05:19.596 --rc genhtml_function_coverage=1 00:05:19.596 --rc genhtml_legend=1 00:05:19.596 --rc geninfo_all_blocks=1 00:05:19.596 --rc geninfo_unexecuted_blocks=1 00:05:19.596 00:05:19.596 ' 00:05:19.596 14:07:20 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:19.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.596 --rc genhtml_branch_coverage=1 00:05:19.596 --rc genhtml_function_coverage=1 00:05:19.596 --rc genhtml_legend=1 00:05:19.596 --rc geninfo_all_blocks=1 00:05:19.596 --rc geninfo_unexecuted_blocks=1 00:05:19.596 00:05:19.596 ' 00:05:19.596 14:07:20 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:19.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.596 --rc genhtml_branch_coverage=1 00:05:19.596 --rc genhtml_function_coverage=1 00:05:19.596 --rc genhtml_legend=1 00:05:19.596 --rc geninfo_all_blocks=1 00:05:19.596 --rc geninfo_unexecuted_blocks=1 00:05:19.596 00:05:19.596 ' 00:05:19.596 14:07:20 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:19.596 14:07:20 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:19.596 14:07:20 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:19.596 14:07:20 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:19.596 14:07:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.596 14:07:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.596 ************************************ 00:05:19.596 START TEST event_perf 00:05:19.596 ************************************ 00:05:19.596 14:07:20 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:19.596 Running I/O for 1 seconds...[2024-12-10 14:07:20.256380] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:05:19.596 [2024-12-10 14:07:20.256447] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1443599 ] 00:05:19.855 [2024-12-10 14:07:20.341853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:19.855 [2024-12-10 14:07:20.386601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.855 [2024-12-10 14:07:20.386709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:19.855 [2024-12-10 14:07:20.386739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.855 [2024-12-10 14:07:20.386740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.790 Running I/O for 1 seconds... 00:05:20.790 lcore 0: 208554 00:05:20.790 lcore 1: 208552 00:05:20.790 lcore 2: 208553 00:05:20.790 lcore 3: 208553 00:05:20.790 done. 00:05:20.790 00:05:20.790 real 0m1.191s 00:05:20.790 user 0m4.106s 00:05:20.790 sys 0m0.083s 00:05:20.790 14:07:21 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.790 14:07:21 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.790 ************************************ 00:05:20.791 END TEST event_perf 00:05:20.791 ************************************ 00:05:20.791 14:07:21 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:20.791 14:07:21 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:20.791 14:07:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.791 14:07:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.791 ************************************ 00:05:20.791 START TEST event_reactor 00:05:20.791 ************************************ 00:05:20.791 14:07:21 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:20.791 [2024-12-10 14:07:21.514105] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
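For reference on the event_perf figures above: the per-lcore lines count events processed on each reactor during the 1-second run, so 208554 + 208552 + 208553 + 208553 ≈ 834k events across the four cores, and user time (~4.1s) comes out near four times wall time (~1.19s) because all four reactors poll continuously for the whole measurement.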
00:05:20.791 [2024-12-10 14:07:21.514182] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1443810 ] 00:05:21.049 [2024-12-10 14:07:21.595623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.049 [2024-12-10 14:07:21.633421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.995 test_start 00:05:21.995 oneshot 00:05:21.995 tick 100 00:05:21.995 tick 100 00:05:21.995 tick 250 00:05:21.995 tick 100 00:05:21.995 tick 100 00:05:21.995 tick 250 00:05:21.995 tick 100 00:05:21.995 tick 500 00:05:21.995 tick 100 00:05:21.995 tick 100 00:05:21.995 tick 250 00:05:21.995 tick 100 00:05:21.995 tick 100 00:05:21.995 test_end 00:05:21.995 00:05:21.995 real 0m1.172s 00:05:21.995 user 0m1.090s 00:05:21.995 sys 0m0.079s 00:05:21.995 14:07:22 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.995 14:07:22 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:21.995 ************************************ 00:05:21.995 END TEST event_reactor 00:05:21.995 ************************************ 00:05:21.995 14:07:22 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:21.995 14:07:22 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:21.995 14:07:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.995 14:07:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.254 ************************************ 00:05:22.254 START TEST event_reactor_perf 00:05:22.254 ************************************ 00:05:22.254 14:07:22 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:22.254 [2024-12-10 14:07:22.761909] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:05:22.254 [2024-12-10 14:07:22.761975] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1444046 ] 00:05:22.254 [2024-12-10 14:07:22.846339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.254 [2024-12-10 14:07:22.884659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.189 test_start 00:05:23.189 test_end 00:05:23.189 Performance: 520596 events per second 00:05:23.189 00:05:23.189 real 0m1.180s 00:05:23.189 user 0m1.097s 00:05:23.189 sys 0m0.079s 00:05:23.189 14:07:23 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.189 14:07:23 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:23.189 ************************************ 00:05:23.189 END TEST event_reactor_perf 00:05:23.189 ************************************ 00:05:23.448 14:07:23 event -- event/event.sh@49 -- # uname -s 00:05:23.448 14:07:23 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:23.449 14:07:23 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:23.449 14:07:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.449 14:07:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.449 14:07:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.449 ************************************ 00:05:23.449 START TEST event_scheduler 00:05:23.449 ************************************ 00:05:23.449 14:07:23 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:23.449 * Looking for test storage... 
00:05:23.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:23.449 14:07:24 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.449 14:07:24 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.449 14:07:24 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.449 14:07:24 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.449 14:07:24 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:23.449 14:07:24 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.449 14:07:24 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.449 --rc genhtml_branch_coverage=1 00:05:23.449 --rc genhtml_function_coverage=1 00:05:23.449 --rc genhtml_legend=1 00:05:23.449 --rc geninfo_all_blocks=1 00:05:23.449 --rc geninfo_unexecuted_blocks=1 00:05:23.449 00:05:23.449 ' 00:05:23.449 14:07:24 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.449 --rc genhtml_branch_coverage=1 00:05:23.449 --rc genhtml_function_coverage=1 00:05:23.449 --rc genhtml_legend=1 00:05:23.449 --rc geninfo_all_blocks=1 00:05:23.449 --rc geninfo_unexecuted_blocks=1 00:05:23.449 00:05:23.449 ' 00:05:23.449 14:07:24 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:23.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.449 --rc genhtml_branch_coverage=1 00:05:23.449 --rc genhtml_function_coverage=1 00:05:23.449 --rc genhtml_legend=1 00:05:23.449 --rc geninfo_all_blocks=1 00:05:23.449 --rc geninfo_unexecuted_blocks=1 00:05:23.449 00:05:23.449 ' 00:05:23.449 14:07:24 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.449 --rc genhtml_branch_coverage=1 00:05:23.449 --rc genhtml_function_coverage=1 00:05:23.449 --rc genhtml_legend=1 00:05:23.449 --rc geninfo_all_blocks=1 00:05:23.449 --rc geninfo_unexecuted_blocks=1 00:05:23.449 00:05:23.449 ' 00:05:23.449 14:07:24 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:23.449 14:07:24 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1444332 00:05:23.449 14:07:24 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.449 14:07:24 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:23.449 14:07:24 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
1444332 00:05:23.449 14:07:24 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1444332 ']' 00:05:23.449 14:07:24 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.449 14:07:24 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.449 14:07:24 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.449 14:07:24 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.449 14:07:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.709 [2024-12-10 14:07:24.217584] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:05:23.709 [2024-12-10 14:07:24.217633] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1444332 ] 00:05:23.709 [2024-12-10 14:07:24.286222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:23.709 [2024-12-10 14:07:24.332226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.709 [2024-12-10 14:07:24.332267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.709 [2024-12-10 14:07:24.332305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.709 [2024-12-10 14:07:24.332305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:23.709 14:07:24 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.709 14:07:24 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:23.709 14:07:24 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:23.709 14:07:24 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.709 14:07:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.709 [2024-12-10 14:07:24.381023] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:23.709 [2024-12-10 14:07:24.381041] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:23.709 [2024-12-10 14:07:24.381051] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:23.709 [2024-12-10 14:07:24.381057] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:23.709 [2024-12-10 14:07:24.381063] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:23.709 14:07:24 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.709 14:07:24 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:23.709 14:07:24 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.709 14:07:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.968 [2024-12-10 14:07:24.455448] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
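The scheduler_create_thread test that follows drives everything through RPCs supplied by a test-only plugin (module name scheduler_plugin, presumably living under test/event/scheduler in the SPDK tree). Assuming that module is importable, the same calls could be made by hand; the subcommands and flags below are copied from the xtrace:

    # create an always-active thread pinned to core 0 (-m 0x1, -a 100 = 100% active)
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # set thread 11 to 50% activity, then delete thread 12
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12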
00:05:23.968 14:07:24 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.968 14:07:24 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:23.968 14:07:24 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.968 14:07:24 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.968 14:07:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.968 ************************************ 00:05:23.968 START TEST scheduler_create_thread 00:05:23.968 ************************************ 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.968 2 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.968 3 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.968 4 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.968 5 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.968 6 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.968 7 00:05:23.968 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.969 14:07:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:23.969 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.969 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.969 8 00:05:23.969 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.969 14:07:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:23.969 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.969 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.969 9 00:05:23.969 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.969 14:07:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:23.969 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.969 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.969 10 00:05:23.969 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.969 14:07:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:23.969 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.969 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.969 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.969 14:07:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:23.969 14:07:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:23.969 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.969 14:07:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.911 14:07:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.911 14:07:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:24.911 14:07:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.911 14:07:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.288 14:07:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.288 14:07:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:26.288 14:07:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:26.288 14:07:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.288 14:07:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.247 14:07:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.247 00:05:27.247 real 0m3.382s 00:05:27.247 user 0m0.028s 00:05:27.247 sys 0m0.002s 00:05:27.247 14:07:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.247 14:07:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.247 ************************************ 00:05:27.247 END TEST scheduler_create_thread 00:05:27.247 ************************************ 00:05:27.247 14:07:27 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:27.247 14:07:27 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1444332 00:05:27.247 14:07:27 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1444332 ']' 00:05:27.247 14:07:27 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1444332 00:05:27.247 14:07:27 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:27.247 14:07:27 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.247 14:07:27 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1444332 00:05:27.505 14:07:27 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:27.505 14:07:27 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:27.505 14:07:27 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1444332' 00:05:27.505 killing process with pid 1444332 00:05:27.505 14:07:27 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1444332 00:05:27.505 14:07:27 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1444332 00:05:28.194 [2024-12-10 14:07:28.255384] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
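For reference, the scheduler_create_thread test traced above drives everything through the scheduler RPC plugin; rpc_cmd in the xtrace is the test suite's wrapper around scripts/rpc.py. A minimal standalone sketch of the same sequence (thread names, core masks, activity values and the captured thread ids 11 and 12 are taken straight from the log; the RPC socket is assumed to be the app's default) would be:

    RPC='scripts/rpc.py --plugin scheduler_plugin'
    $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100    # one busy thread per core in the mask
    $RPC scheduler_thread_create -n idle_pinned -m 0x1 -a 0        # and one idle thread per core
    thread_id=$($RPC scheduler_thread_create -n half_active -a 0)  # create returns the thread id (11 above)
    $RPC scheduler_thread_set_active "$thread_id" 50               # raise its load to 50%
    thread_id=$($RPC scheduler_thread_create -n deleted -a 100)    # thread 12 above
    $RPC scheduler_thread_delete "$thread_id"                      # and remove it again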
00:05:28.194 00:05:28.194 real 0m4.462s 00:05:28.194 user 0m7.846s 00:05:28.194 sys 0m0.358s 00:05:28.194 14:07:28 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.194 14:07:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.194 ************************************ 00:05:28.194 END TEST event_scheduler 00:05:28.194 ************************************ 00:05:28.194 14:07:28 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:28.194 14:07:28 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:28.194 14:07:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.194 14:07:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.194 14:07:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.194 ************************************ 00:05:28.194 START TEST app_repeat 00:05:28.194 ************************************ 00:05:28.194 14:07:28 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:28.194 14:07:28 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.194 14:07:28 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.194 14:07:28 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:28.195 14:07:28 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.195 14:07:28 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:28.195 14:07:28 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:28.195 14:07:28 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:28.195 14:07:28 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1445209 00:05:28.195 14:07:28 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.195 14:07:28 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:28.195 14:07:28 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1445209' 00:05:28.195 Process app_repeat pid: 1445209 00:05:28.195 14:07:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:28.195 14:07:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:28.195 spdk_app_start Round 0 00:05:28.195 14:07:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1445209 /var/tmp/spdk-nbd.sock 00:05:28.195 14:07:28 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1445209 ']' 00:05:28.195 14:07:28 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.195 14:07:28 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.195 14:07:28 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:28.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.195 14:07:28 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.195 14:07:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.195 [2024-12-10 14:07:28.568251] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
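The app_repeat harness launched above boots an SPDK app on core mask 0x3 and re-runs it for several rounds (the -t 4 matches the repeat_times=4 set just before it in the xtrace). In sketch form, with the flags copied from the command line in the log and waitforlisten being the autotest helper that polls the UNIX-domain RPC socket until it accepts connections:

    test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!                                      # 1445209 in this run
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock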
00:05:28.195 [2024-12-10 14:07:28.568301] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1445209 ] 00:05:28.195 [2024-12-10 14:07:28.649586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.195 [2024-12-10 14:07:28.689172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.195 [2024-12-10 14:07:28.689173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.195 14:07:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.195 14:07:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:28.195 14:07:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.453 Malloc0 00:05:28.453 14:07:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.453 Malloc1 00:05:28.453 14:07:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.453 14:07:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.453 14:07:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.453 14:07:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:28.453 14:07:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.453 14:07:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:28.711 14:07:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.711 14:07:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.711 14:07:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.711 14:07:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:28.711 14:07:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.711 14:07:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:28.711 14:07:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:28.711 14:07:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:28.711 14:07:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.711 14:07:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:28.711 /dev/nbd0 00:05:28.711 14:07:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:28.711 14:07:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:28.711 14:07:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:28.711 14:07:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:28.711 14:07:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:28.711 14:07:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:28.711 14:07:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:28.711 14:07:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:28.711 14:07:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:28.711 14:07:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:28.711 14:07:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.711 1+0 records in 00:05:28.711 1+0 records out 00:05:28.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228615 s, 17.9 MB/s 00:05:28.711 14:07:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:28.711 14:07:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:28.711 14:07:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:28.711 14:07:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:28.711 14:07:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:28.711 14:07:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.711 14:07:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.711 14:07:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:29.005 /dev/nbd1 00:05:29.005 14:07:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:29.005 14:07:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:29.005 14:07:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:29.005 14:07:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:29.005 14:07:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:29.005 14:07:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:29.005 14:07:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:29.005 14:07:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:29.005 14:07:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:29.005 14:07:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:29.005 14:07:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.005 1+0 records in 00:05:29.005 1+0 records out 00:05:29.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245262 s, 16.7 MB/s 00:05:29.005 14:07:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.005 14:07:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:29.005 14:07:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.005 14:07:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:29.005 14:07:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:29.005 14:07:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.005 14:07:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.005 
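Both NBD devices are now attached. The waitfornbd xtrace above amounts to polling /proc/partitions until the kernel publishes the device, then proving it readable with one direct-I/O block. An outline consistent with that trace (the retry pacing is assumed, since only the counter is visible in the log, and the test-file path is shortened here):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do                     # wait for the device node to appear
            grep -q -w "$nbd_name" /proc/partitions && break
        done
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ]               # the read must have produced data
        rm -f /tmp/nbdtest
    }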
14:07:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.005 14:07:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.005 14:07:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:29.282 { 00:05:29.282 "nbd_device": "/dev/nbd0", 00:05:29.282 "bdev_name": "Malloc0" 00:05:29.282 }, 00:05:29.282 { 00:05:29.282 "nbd_device": "/dev/nbd1", 00:05:29.282 "bdev_name": "Malloc1" 00:05:29.282 } 00:05:29.282 ]' 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:29.282 { 00:05:29.282 "nbd_device": "/dev/nbd0", 00:05:29.282 "bdev_name": "Malloc0" 00:05:29.282 }, 00:05:29.282 { 00:05:29.282 "nbd_device": "/dev/nbd1", 00:05:29.282 "bdev_name": "Malloc1" 00:05:29.282 } 00:05:29.282 ]' 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:29.282 /dev/nbd1' 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:29.282 /dev/nbd1' 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:29.282 256+0 records in 00:05:29.282 256+0 records out 00:05:29.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101529 s, 103 MB/s 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:29.282 256+0 records in 00:05:29.282 256+0 records out 00:05:29.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135145 s, 77.6 MB/s 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:29.282 256+0 records in 00:05:29.282 256+0 records out 00:05:29.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148254 s, 70.7 MB/s 00:05:29.282 14:07:29 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:29.282 14:07:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:29.282 14:07:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:29.282 14:07:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.282 14:07:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.282 14:07:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:29.282 14:07:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:29.282 14:07:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.282 14:07:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:29.541 14:07:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:29.541 14:07:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:29.541 14:07:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:29.541 14:07:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.541 14:07:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.541 14:07:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:29.541 14:07:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.541 14:07:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.541 14:07:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.541 14:07:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:29.800 14:07:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:29.800 14:07:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:29.800 14:07:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:29.800 14:07:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.800 14:07:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:29.800 14:07:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:29.800 14:07:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.800 14:07:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.800 14:07:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.800 14:07:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.800 14:07:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.059 14:07:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:30.059 14:07:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:30.059 14:07:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.059 14:07:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:30.059 14:07:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.059 14:07:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:30.059 14:07:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:30.059 14:07:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:30.059 14:07:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:30.059 14:07:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:30.059 14:07:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:30.059 14:07:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:30.059 14:07:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:30.318 14:07:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:30.318 [2024-12-10 14:07:31.023559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.577 [2024-12-10 14:07:31.060330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.577 [2024-12-10 14:07:31.060330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.577 [2024-12-10 14:07:31.101162] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.577 [2024-12-10 14:07:31.101204] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:33.864 14:07:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:33.864 14:07:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:33.864 spdk_app_start Round 1 00:05:33.864 14:07:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1445209 /var/tmp/spdk-nbd.sock 00:05:33.864 14:07:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1445209 ']' 00:05:33.864 14:07:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:33.864 14:07:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.864 14:07:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:33.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
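That completes Round 0. The data check in the middle of the round is plain dd/cmp plumbing: 1 MiB of random data is pushed through each NBD device with direct I/O and compared back against the source file. Reduced to its essentials, with the paths trimmed to their repo-relative form:

    dd if=/dev/urandom of=test/event/nbdrandtest bs=4096 count=256            # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=test/event/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct # write it through NBD
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M test/event/nbdrandtest "$nbd"                            # read back and compare
    done
    rm test/event/nbdrandtest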
00:05:33.864 14:07:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.864 14:07:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.864 14:07:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.864 14:07:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:33.864 14:07:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.864 Malloc0 00:05:33.864 14:07:34 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.864 Malloc1 00:05:33.864 14:07:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.864 14:07:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.864 14:07:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.864 14:07:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.864 14:07:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.864 14:07:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.864 14:07:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.864 14:07:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.864 14:07:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.864 14:07:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.864 14:07:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.864 14:07:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.864 14:07:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:33.864 14:07:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.864 14:07:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.864 14:07:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:34.123 /dev/nbd0 00:05:34.123 14:07:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:34.123 14:07:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:34.123 14:07:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:34.123 14:07:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:34.123 14:07:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:34.123 14:07:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:34.123 14:07:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:34.123 14:07:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:34.123 14:07:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:34.123 14:07:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:34.123 14:07:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:34.123 1+0 records in 00:05:34.123 1+0 records out 00:05:34.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230388 s, 17.8 MB/s 00:05:34.123 14:07:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.123 14:07:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:34.123 14:07:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.123 14:07:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:34.123 14:07:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:34.123 14:07:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.123 14:07:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.123 14:07:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:34.382 /dev/nbd1 00:05:34.382 14:07:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:34.382 14:07:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:34.382 14:07:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:34.382 14:07:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:34.382 14:07:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:34.382 14:07:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:34.382 14:07:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:34.382 14:07:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:34.382 14:07:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:34.382 14:07:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:34.382 14:07:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.382 1+0 records in 00:05:34.382 1+0 records out 00:05:34.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211155 s, 19.4 MB/s 00:05:34.382 14:07:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.382 14:07:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:34.382 14:07:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.382 14:07:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:34.382 14:07:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:34.382 14:07:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.382 14:07:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.382 14:07:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.382 14:07:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.382 14:07:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:34.641 { 00:05:34.641 "nbd_device": "/dev/nbd0", 00:05:34.641 "bdev_name": "Malloc0" 00:05:34.641 }, 00:05:34.641 { 00:05:34.641 "nbd_device": "/dev/nbd1", 00:05:34.641 "bdev_name": "Malloc1" 00:05:34.641 } 00:05:34.641 ]' 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.641 { 00:05:34.641 "nbd_device": "/dev/nbd0", 00:05:34.641 "bdev_name": "Malloc0" 00:05:34.641 }, 00:05:34.641 { 00:05:34.641 "nbd_device": "/dev/nbd1", 00:05:34.641 "bdev_name": "Malloc1" 00:05:34.641 } 00:05:34.641 ]' 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:34.641 /dev/nbd1' 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:34.641 /dev/nbd1' 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:34.641 256+0 records in 00:05:34.641 256+0 records out 00:05:34.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010561 s, 99.3 MB/s 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:34.641 256+0 records in 00:05:34.641 256+0 records out 00:05:34.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139013 s, 75.4 MB/s 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:34.641 256+0 records in 00:05:34.641 256+0 records out 00:05:34.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146342 s, 71.7 MB/s 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.641 14:07:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:34.900 14:07:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:34.900 14:07:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:34.900 14:07:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:34.900 14:07:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.900 14:07:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.900 14:07:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:34.900 14:07:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.900 14:07:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.900 14:07:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.900 14:07:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:35.159 14:07:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:35.159 14:07:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:35.159 14:07:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:35.159 14:07:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.159 14:07:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.159 14:07:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:35.159 14:07:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.159 14:07:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.159 14:07:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.159 14:07:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.159 14:07:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.418 14:07:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:35.418 14:07:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:35.418 14:07:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.418 14:07:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:35.418 14:07:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:35.418 14:07:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.418 14:07:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:35.418 14:07:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:35.418 14:07:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:35.418 14:07:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:35.418 14:07:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:35.418 14:07:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:35.418 14:07:35 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:35.676 14:07:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:35.676 [2024-12-10 14:07:36.340993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.676 [2024-12-10 14:07:36.379376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.676 [2024-12-10 14:07:36.379377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.935 [2024-12-10 14:07:36.420708] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:35.935 [2024-12-10 14:07:36.420741] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:38.469 14:07:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.469 14:07:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:38.469 spdk_app_start Round 2 00:05:38.469 14:07:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1445209 /var/tmp/spdk-nbd.sock 00:05:38.469 14:07:39 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1445209 ']' 00:05:38.469 14:07:39 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.469 14:07:39 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.469 14:07:39 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
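As the Round banners show, event.sh repeats the same create/verify/teardown cycle, ending each pass with an RPC-initiated SIGTERM so the app restarts its reactors before the next round. The loop implied by the xtrace (the for i in {0..2}, the spdk_kill_instance call and the sleep 3 all appear in the log) is roughly:

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        # create Malloc0/Malloc1, then start, verify and stop both NBD disks (as above)
        rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3                                        # let the app come back up before the next round
    done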
00:05:38.469 14:07:39 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.469 14:07:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.727 14:07:39 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.727 14:07:39 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:38.727 14:07:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.986 Malloc0 00:05:38.986 14:07:39 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.245 Malloc1 00:05:39.245 14:07:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.245 14:07:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.245 14:07:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.245 14:07:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.245 14:07:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.245 14:07:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.245 14:07:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.245 14:07:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.245 14:07:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.245 14:07:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.245 14:07:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.245 14:07:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:39.245 14:07:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:39.245 14:07:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.245 14:07:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.245 14:07:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.503 /dev/nbd0 00:05:39.503 14:07:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.503 14:07:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.503 14:07:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:39.503 14:07:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:39.504 14:07:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:39.504 14:07:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:39.504 14:07:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:39.504 14:07:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:39.504 14:07:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:39.504 14:07:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:39.504 14:07:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:39.504 1+0 records in 00:05:39.504 1+0 records out 00:05:39.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000105578 s, 38.8 MB/s 00:05:39.504 14:07:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.504 14:07:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:39.504 14:07:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.504 14:07:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:39.504 14:07:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:39.504 14:07:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.504 14:07:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.504 14:07:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:39.762 /dev/nbd1 00:05:39.762 14:07:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:39.762 14:07:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:39.762 14:07:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:39.762 14:07:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:39.762 14:07:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:39.762 14:07:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:39.762 14:07:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:39.762 14:07:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:39.762 14:07:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:39.762 14:07:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:39.762 14:07:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.762 1+0 records in 00:05:39.762 1+0 records out 00:05:39.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216101 s, 19.0 MB/s 00:05:39.762 14:07:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.762 14:07:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:39.762 14:07:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.762 14:07:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:39.762 14:07:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:39.762 14:07:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.762 14:07:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.762 14:07:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.763 14:07:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.763 14:07:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.021 14:07:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:40.021 { 00:05:40.021 "nbd_device": "/dev/nbd0", 00:05:40.021 "bdev_name": "Malloc0" 00:05:40.021 }, 00:05:40.021 { 00:05:40.021 "nbd_device": "/dev/nbd1", 00:05:40.021 "bdev_name": "Malloc1" 00:05:40.021 } 00:05:40.021 ]' 00:05:40.021 14:07:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.021 { 00:05:40.021 "nbd_device": "/dev/nbd0", 00:05:40.021 "bdev_name": "Malloc0" 00:05:40.021 }, 00:05:40.021 { 00:05:40.021 "nbd_device": "/dev/nbd1", 00:05:40.021 "bdev_name": "Malloc1" 00:05:40.021 } 00:05:40.021 ]' 00:05:40.021 14:07:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.021 14:07:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.021 /dev/nbd1' 00:05:40.021 14:07:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.021 /dev/nbd1' 00:05:40.021 14:07:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.021 14:07:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.021 14:07:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.021 14:07:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.021 14:07:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.021 14:07:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.021 14:07:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.021 14:07:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.021 14:07:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.021 14:07:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.021 14:07:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.021 14:07:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.021 256+0 records in 00:05:40.021 256+0 records out 00:05:40.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106232 s, 98.7 MB/s 00:05:40.021 14:07:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.021 14:07:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.021 256+0 records in 00:05:40.021 256+0 records out 00:05:40.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149914 s, 69.9 MB/s 00:05:40.021 14:07:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.022 14:07:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.022 256+0 records in 00:05:40.022 256+0 records out 00:05:40.022 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151633 s, 69.2 MB/s 00:05:40.022 14:07:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.022 14:07:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.022 14:07:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.022 14:07:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.022 14:07:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.022 14:07:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.022 14:07:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.022 14:07:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.022 14:07:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.022 14:07:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.022 14:07:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.022 14:07:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.022 14:07:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.022 14:07:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.022 14:07:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.022 14:07:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.022 14:07:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:40.022 14:07:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.022 14:07:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.280 14:07:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.280 14:07:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.280 14:07:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.280 14:07:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.280 14:07:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.280 14:07:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.280 14:07:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.280 14:07:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.280 14:07:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.280 14:07:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.538 14:07:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.538 14:07:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:40.538 14:07:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.538 14:07:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.538 14:07:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.538 14:07:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.538 14:07:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.538 14:07:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.538 14:07:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.538 14:07:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.538 14:07:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.538 14:07:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:40.538 14:07:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:40.538 14:07:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.797 14:07:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:40.797 14:07:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:40.797 14:07:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.797 14:07:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:40.797 14:07:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:40.797 14:07:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:40.797 14:07:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:40.797 14:07:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:40.797 14:07:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:40.797 14:07:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.797 14:07:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:41.055 [2024-12-10 14:07:41.672309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.055 [2024-12-10 14:07:41.707505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.055 [2024-12-10 14:07:41.707506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.055 [2024-12-10 14:07:41.747984] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.055 [2024-12-10 14:07:41.748025] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:44.343 14:07:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1445209 /var/tmp/spdk-nbd.sock 00:05:44.343 14:07:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1445209 ']' 00:05:44.343 14:07:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.343 14:07:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.343 14:07:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:44.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
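One last waitforlisten, and then the harness itself is shut down for good with killprocess. Pieced together from the xtrace here and in the scheduler teardown earlier, the helper checks that the pid is still alive and still an SPDK reactor before killing and reaping it; the sudo-wrapped case gets special handling that is elided in this sketch:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                     # fail fast if it already exited
        local pname=$(ps --no-headers -o comm= "$pid")     # reactor_0 for an SPDK app
        [ "$pname" = sudo ] && return 1                    # elided: would target sudo's child instead
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                        # reap it so the RPC socket is gone
    }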
00:05:44.343 14:07:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.343 14:07:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.343 14:07:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.343 14:07:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:44.343 14:07:44 event.app_repeat -- event/event.sh@39 -- # killprocess 1445209 00:05:44.343 14:07:44 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1445209 ']' 00:05:44.343 14:07:44 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1445209 00:05:44.343 14:07:44 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:44.343 14:07:44 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.343 14:07:44 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1445209 00:05:44.343 14:07:44 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.343 14:07:44 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.343 14:07:44 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1445209' 00:05:44.343 killing process with pid 1445209 00:05:44.343 14:07:44 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1445209 00:05:44.343 14:07:44 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1445209 00:05:44.343 spdk_app_start is called in Round 0. 00:05:44.343 Shutdown signal received, stop current app iteration 00:05:44.343 Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 reinitialization... 00:05:44.343 spdk_app_start is called in Round 1. 00:05:44.343 Shutdown signal received, stop current app iteration 00:05:44.343 Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 reinitialization... 00:05:44.343 spdk_app_start is called in Round 2. 00:05:44.343 Shutdown signal received, stop current app iteration 00:05:44.343 Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 reinitialization... 00:05:44.343 spdk_app_start is called in Round 3. 
00:05:44.343 Shutdown signal received, stop current app iteration 00:05:44.343 14:07:44 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:44.343 14:07:44 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:44.343 00:05:44.343 real 0m16.391s 00:05:44.343 user 0m36.066s 00:05:44.343 sys 0m2.520s 00:05:44.343 14:07:44 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.343 14:07:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.343 ************************************ 00:05:44.343 END TEST app_repeat 00:05:44.343 ************************************ 00:05:44.343 14:07:44 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:44.343 14:07:44 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:44.343 14:07:44 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.343 14:07:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.343 14:07:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.343 ************************************ 00:05:44.343 START TEST cpu_locks 00:05:44.344 ************************************ 00:05:44.344 14:07:44 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:44.344 * Looking for test storage... 00:05:44.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:44.601 14:07:45 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:44.601 14:07:45 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:44.601 14:07:45 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:44.601 14:07:45 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.601 14:07:45 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:44.601 14:07:45 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.601 14:07:45 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:44.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.601 --rc genhtml_branch_coverage=1 00:05:44.601 --rc genhtml_function_coverage=1 00:05:44.601 --rc genhtml_legend=1 00:05:44.601 --rc geninfo_all_blocks=1 00:05:44.601 --rc geninfo_unexecuted_blocks=1 00:05:44.601 00:05:44.601 ' 00:05:44.601 14:07:45 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:44.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.601 --rc genhtml_branch_coverage=1 00:05:44.601 --rc genhtml_function_coverage=1 00:05:44.601 --rc genhtml_legend=1 00:05:44.601 --rc geninfo_all_blocks=1 00:05:44.601 --rc geninfo_unexecuted_blocks=1 00:05:44.601 00:05:44.601 ' 00:05:44.601 14:07:45 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:44.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.601 --rc genhtml_branch_coverage=1 00:05:44.601 --rc genhtml_function_coverage=1 00:05:44.601 --rc genhtml_legend=1 00:05:44.601 --rc geninfo_all_blocks=1 00:05:44.601 --rc geninfo_unexecuted_blocks=1 00:05:44.601 00:05:44.601 ' 00:05:44.601 14:07:45 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:44.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.601 --rc genhtml_branch_coverage=1 00:05:44.601 --rc genhtml_function_coverage=1 00:05:44.601 --rc genhtml_legend=1 00:05:44.601 --rc geninfo_all_blocks=1 00:05:44.601 --rc geninfo_unexecuted_blocks=1 00:05:44.601 00:05:44.601 ' 00:05:44.601 14:07:45 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:44.601 14:07:45 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:44.601 14:07:45 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:44.601 14:07:45 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:44.601 14:07:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.601 14:07:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.601 14:07:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.601 ************************************ 
00:05:44.601 START TEST default_locks 00:05:44.601 ************************************ 00:05:44.601 14:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:44.601 14:07:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1448237 00:05:44.601 14:07:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1448237 00:05:44.601 14:07:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.601 14:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1448237 ']' 00:05:44.601 14:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.601 14:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.601 14:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.601 14:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.601 14:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.601 [2024-12-10 14:07:45.260059] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:05:44.601 [2024-12-10 14:07:45.260099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448237 ] 00:05:44.601 [2024-12-10 14:07:45.339262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.860 [2024-12-10 14:07:45.379530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.860 14:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.860 14:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:44.860 14:07:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1448237 00:05:44.860 14:07:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1448237 00:05:44.860 14:07:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.119 lslocks: write error 00:05:45.119 14:07:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1448237 00:05:45.119 14:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1448237 ']' 00:05:45.119 14:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1448237 00:05:45.119 14:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:45.119 14:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.119 14:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1448237 00:05:45.119 14:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.119 14:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.119 14:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 1448237' 00:05:45.119 killing process with pid 1448237 00:05:45.119 14:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1448237 00:05:45.119 14:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1448237 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1448237 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1448237 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1448237 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1448237 ']' 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
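
The default_locks flow above asserts that a target started with -m 0x1 really holds a core lock. A hedged sketch of that lslocks check (the path fragment comes from the /var/tmp/spdk_cpu_lock_* names visible later in this log):

    # True if the pid holds any POSIX lock whose path mentions spdk_cpu_lock.
    locks_exist_sketch() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

The 'lslocks: write error' lines in this log are benign: grep -q exits at the first match and closes the pipe, so lslocks hits EPIPE on its remaining output rather than reporting a real failure.
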
00:05:45.687 14:07:46 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1448237) - No such process 00:05:45.687 ERROR: process (pid: 1448237) is no longer running 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:45.687 00:05:45.687 real 0m0.934s 00:05:45.687 user 0m0.878s 00:05:45.687 sys 0m0.434s 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.687 14:07:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.687 ************************************ 00:05:45.687 END TEST default_locks 00:05:45.687 ************************************ 00:05:45.687 14:07:46 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:45.687 14:07:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.687 14:07:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.687 14:07:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.687 ************************************ 00:05:45.687 START TEST default_locks_via_rpc 00:05:45.687 ************************************ 00:05:45.687 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:45.687 14:07:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1448398 00:05:45.687 14:07:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1448398 00:05:45.687 14:07:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.687 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1448398 ']' 00:05:45.687 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.687 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.687 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
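
After the target is killed, the test reruns waitforlisten on the dead pid and expects it to fail; the 'kill: (1448237) - No such process' and 'ERROR: process (pid: 1448237) is no longer running' lines are the desired outcome. A minimal sketch of the NOT wrapper pattern (the real helper in autotest_common.sh also tracks es, the expected error status, as the trace shows):

    # Invert a command's result: succeed only when the command fails.
    NOT_sketch() {
        if "$@"; then
            return 1   # unexpectedly succeeded
        fi
        return 0       # failed, as the test expects
    }
    # usage: NOT_sketch waitforlisten "$dead_pid" /var/tmp/spdk.sock
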
00:05:45.687 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.687 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.687 [2024-12-10 14:07:46.265390] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:05:45.687 [2024-12-10 14:07:46.265435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448398 ] 00:05:45.687 [2024-12-10 14:07:46.343284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.688 [2024-12-10 14:07:46.383590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.946 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.946 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:45.946 14:07:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:45.946 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.946 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.947 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.947 14:07:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:45.947 14:07:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:45.947 14:07:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:45.947 14:07:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:45.947 14:07:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:45.947 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.947 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.947 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.947 14:07:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1448398 00:05:45.947 14:07:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1448398 00:05:45.947 14:07:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.205 14:07:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1448398 00:05:46.205 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1448398 ']' 00:05:46.205 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1448398 00:05:46.205 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:46.205 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.205 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1448398 00:05:46.205 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.205 
14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.205 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1448398' 00:05:46.205 killing process with pid 1448398 00:05:46.205 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1448398 00:05:46.206 14:07:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1448398 00:05:46.774 00:05:46.774 real 0m1.018s 00:05:46.774 user 0m0.972s 00:05:46.774 sys 0m0.459s 00:05:46.774 14:07:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.774 14:07:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.774 ************************************ 00:05:46.774 END TEST default_locks_via_rpc 00:05:46.774 ************************************ 00:05:46.774 14:07:47 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:46.774 14:07:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.774 14:07:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.774 14:07:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.774 ************************************ 00:05:46.774 START TEST non_locking_app_on_locked_coremask 00:05:46.774 ************************************ 00:05:46.774 14:07:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:46.774 14:07:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1448533 00:05:46.774 14:07:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1448533 /var/tmp/spdk.sock 00:05:46.774 14:07:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.774 14:07:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1448533 ']' 00:05:46.774 14:07:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.774 14:07:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.774 14:07:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.774 14:07:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.774 14:07:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.774 [2024-12-10 14:07:47.352716] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
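
The via_rpc variant above differs from default_locks only in when the locks are taken: the target keeps running and the test toggles locking over the RPC socket. A hedged sketch of that sequence, using the two RPC names visible in the trace (the assertions between the calls are paraphrased):

    # default_locks_via_rpc shape: drop and re-take the core locks at runtime.
    scripts/rpc.py framework_disable_cpumask_locks   # drop /var/tmp/spdk_cpu_lock_*
    # ... assert no lock files remain for this target ...
    scripts/rpc.py framework_enable_cpumask_locks    # re-claim locks for the -m cores
    lslocks -p "$pid" | grep -q spdk_cpu_lock        # holds the lock again
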
00:05:46.774 [2024-12-10 14:07:47.352759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448533 ] 00:05:46.774 [2024-12-10 14:07:47.436732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.774 [2024-12-10 14:07:47.476868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.711 14:07:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.711 14:07:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:47.711 14:07:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:47.711 14:07:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1448762 00:05:47.711 14:07:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1448762 /var/tmp/spdk2.sock 00:05:47.711 14:07:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1448762 ']' 00:05:47.711 14:07:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.711 14:07:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.711 14:07:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.711 14:07:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.711 14:07:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.711 [2024-12-10 14:07:48.209806] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:05:47.711 [2024-12-10 14:07:48.209850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448762 ] 00:05:47.711 [2024-12-10 14:07:48.301641] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
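
The non_locking test above shows the escape hatch: the second target runs on the same core 0 but passes --disable-cpumask-locks, so it prints 'CPU core locks deactivated.' instead of contending for /var/tmp/spdk_cpu_lock_000. A condensed sketch of the two launches (binary path abbreviated from the full workspace path in the trace):

    # First instance claims core 0; second opts out of locking and coexists.
    build/bin/spdk_tgt -m 0x1 &
    pid1=$!
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!   # separate RPC socket keeps both targets independently reachable
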
00:05:47.711 [2024-12-10 14:07:48.301662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.711 [2024-12-10 14:07:48.376459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.648 14:07:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.648 14:07:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:48.648 14:07:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1448533 00:05:48.648 14:07:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1448533 00:05:48.648 14:07:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.907 lslocks: write error 00:05:48.908 14:07:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1448533 00:05:48.908 14:07:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1448533 ']' 00:05:48.908 14:07:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1448533 00:05:48.908 14:07:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:48.908 14:07:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.908 14:07:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1448533 00:05:48.908 14:07:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.908 14:07:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.908 14:07:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1448533' 00:05:48.908 killing process with pid 1448533 00:05:48.908 14:07:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1448533 00:05:48.908 14:07:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1448533 00:05:49.476 14:07:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1448762 00:05:49.476 14:07:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1448762 ']' 00:05:49.476 14:07:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1448762 00:05:49.476 14:07:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:49.476 14:07:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.476 14:07:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1448762 00:05:49.476 14:07:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.476 14:07:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.476 14:07:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1448762' 00:05:49.476 
killing process with pid 1448762 00:05:49.476 14:07:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1448762 00:05:49.476 14:07:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1448762 00:05:49.735 00:05:49.735 real 0m3.069s 00:05:49.735 user 0m3.305s 00:05:49.735 sys 0m0.893s 00:05:49.735 14:07:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.735 14:07:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.735 ************************************ 00:05:49.735 END TEST non_locking_app_on_locked_coremask 00:05:49.735 ************************************ 00:05:49.735 14:07:50 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:49.735 14:07:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.735 14:07:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.735 14:07:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.735 ************************************ 00:05:49.735 START TEST locking_app_on_unlocked_coremask 00:05:49.735 ************************************ 00:05:49.735 14:07:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:49.735 14:07:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1449234 00:05:49.735 14:07:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1449234 /var/tmp/spdk.sock 00:05:49.735 14:07:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:49.735 14:07:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1449234 ']' 00:05:49.735 14:07:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.735 14:07:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.735 14:07:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.735 14:07:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.735 14:07:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.995 [2024-12-10 14:07:50.493708] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:05:49.995 [2024-12-10 14:07:50.493752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1449234 ] 00:05:49.995 [2024-12-10 14:07:50.574672] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
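
Each test tears down its targets through the killprocess helper traced above. A hedged sketch of its shape (the ps comm check mirrors the reactor_0-versus-sudo guard in the trace; exact retries and error text differ in autotest_common.sh):

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0    # already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for spdk_tgt
        [ "$name" = sudo ] && return 1            # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                               # reap the child process
    }
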
00:05:49.995 [2024-12-10 14:07:50.574701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.995 [2024-12-10 14:07:50.612819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.931 14:07:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.931 14:07:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:50.931 14:07:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:50.931 14:07:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1449258 00:05:50.931 14:07:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1449258 /var/tmp/spdk2.sock 00:05:50.931 14:07:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1449258 ']' 00:05:50.931 14:07:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.931 14:07:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.931 14:07:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.931 14:07:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.931 14:07:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.931 [2024-12-10 14:07:51.371374] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:05:50.931 [2024-12-10 14:07:51.371426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1449258 ] 00:05:50.931 [2024-12-10 14:07:51.467231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.931 [2024-12-10 14:07:51.546980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.499 14:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.499 14:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:51.499 14:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1449258 00:05:51.499 14:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1449258 00:05:51.499 14:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.067 lslocks: write error 00:05:52.067 14:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1449234 00:05:52.067 14:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1449234 ']' 00:05:52.067 14:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1449234 00:05:52.067 14:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:52.067 14:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.067 14:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1449234 00:05:52.067 14:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.067 14:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.067 14:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1449234' 00:05:52.067 killing process with pid 1449234 00:05:52.067 14:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1449234 00:05:52.067 14:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1449234 00:05:53.005 14:07:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1449258 00:05:53.005 14:07:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1449258 ']' 00:05:53.005 14:07:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1449258 00:05:53.005 14:07:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:53.005 14:07:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.005 14:07:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1449258 00:05:53.005 14:07:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.005 14:07:53 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.005 14:07:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1449258' 00:05:53.005 killing process with pid 1449258 00:05:53.005 14:07:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1449258 00:05:53.005 14:07:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1449258 00:05:53.005 00:05:53.005 real 0m3.295s 00:05:53.005 user 0m3.546s 00:05:53.005 sys 0m0.980s 00:05:53.005 14:07:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.005 14:07:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.005 ************************************ 00:05:53.005 END TEST locking_app_on_unlocked_coremask 00:05:53.005 ************************************ 00:05:53.264 14:07:53 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:53.264 14:07:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.264 14:07:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.264 14:07:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.264 ************************************ 00:05:53.264 START TEST locking_app_on_locked_coremask 00:05:53.264 ************************************ 00:05:53.264 14:07:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:53.264 14:07:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1449744 00:05:53.264 14:07:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1449744 /var/tmp/spdk.sock 00:05:53.264 14:07:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.264 14:07:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1449744 ']' 00:05:53.264 14:07:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.264 14:07:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.264 14:07:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.264 14:07:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.264 14:07:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.264 [2024-12-10 14:07:53.853212] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
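
Every spdk_tgt launch in this log is followed by waitforlisten, which blocks until the RPC socket answers. A minimal sketch of the polling idea (max_retries=100 comes from the trace; probing with rpc_get_methods is an assumption, not the exact autotest_common.sh implementation):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while waiting
            [ -S "$rpc_addr" ] &&
                scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 &&
                return 0
            sleep 0.5
        done
        return 1
    }
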
00:05:53.264 [2024-12-10 14:07:53.853265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1449744 ] 00:05:53.264 [2024-12-10 14:07:53.932603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.264 [2024-12-10 14:07:53.971712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.200 14:07:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.201 14:07:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:54.201 14:07:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:54.201 14:07:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1449967 00:05:54.201 14:07:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1449967 /var/tmp/spdk2.sock 00:05:54.201 14:07:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:54.201 14:07:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1449967 /var/tmp/spdk2.sock 00:05:54.201 14:07:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:54.201 14:07:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.201 14:07:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:54.201 14:07:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.201 14:07:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1449967 /var/tmp/spdk2.sock 00:05:54.201 14:07:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1449967 ']' 00:05:54.201 14:07:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.201 14:07:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.201 14:07:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.201 14:07:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.201 14:07:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.201 [2024-12-10 14:07:54.739878] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:05:54.201 [2024-12-10 14:07:54.739926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1449967 ] 00:05:54.201 [2024-12-10 14:07:54.835812] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1449744 has claimed it. 00:05:54.201 [2024-12-10 14:07:54.835855] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:54.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1449967) - No such process 00:05:54.768 ERROR: process (pid: 1449967) is no longer running 00:05:54.768 14:07:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.768 14:07:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:54.768 14:07:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:54.768 14:07:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:54.768 14:07:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:54.768 14:07:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:54.768 14:07:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1449744 00:05:54.768 14:07:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1449744 00:05:54.768 14:07:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.026 lslocks: write error 00:05:55.026 14:07:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1449744 00:05:55.026 14:07:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1449744 ']' 00:05:55.026 14:07:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1449744 00:05:55.026 14:07:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:55.026 14:07:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.026 14:07:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1449744 00:05:55.285 14:07:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.285 14:07:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.285 14:07:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1449744' 00:05:55.285 killing process with pid 1449744 00:05:55.285 14:07:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1449744 00:05:55.285 14:07:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1449744 00:05:55.544 00:05:55.544 real 0m2.291s 00:05:55.544 user 0m2.554s 00:05:55.544 sys 0m0.644s 00:05:55.544 14:07:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:55.544 14:07:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.544 ************************************ 00:05:55.544 END TEST locking_app_on_locked_coremask 00:05:55.544 ************************************ 00:05:55.544 14:07:56 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:55.544 14:07:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.544 14:07:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.544 14:07:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.544 ************************************ 00:05:55.544 START TEST locking_overlapped_coremask 00:05:55.544 ************************************ 00:05:55.544 14:07:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:55.544 14:07:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1450226 00:05:55.544 14:07:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1450226 /var/tmp/spdk.sock 00:05:55.544 14:07:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:55.544 14:07:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1450226 ']' 00:05:55.544 14:07:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.544 14:07:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.544 14:07:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.544 14:07:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.544 14:07:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.544 [2024-12-10 14:07:56.210071] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:05:55.544 [2024-12-10 14:07:56.210113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1450226 ] 00:05:55.803 [2024-12-10 14:07:56.288297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.803 [2024-12-10 14:07:56.330973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.803 [2024-12-10 14:07:56.331081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.803 [2024-12-10 14:07:56.331082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.370 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.370 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:56.370 14:07:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1450318 00:05:56.370 14:07:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1450318 /var/tmp/spdk2.sock 00:05:56.370 14:07:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:56.370 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:56.370 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1450318 /var/tmp/spdk2.sock 00:05:56.370 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:56.370 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.370 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:56.370 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.370 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1450318 /var/tmp/spdk2.sock 00:05:56.370 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1450318 ']' 00:05:56.370 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.370 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.370 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.370 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.370 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.370 [2024-12-10 14:07:57.102510] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:05:56.370 [2024-12-10 14:07:57.102560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1450318 ] 00:05:56.629 [2024-12-10 14:07:57.205629] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1450226 has claimed it. 00:05:56.629 [2024-12-10 14:07:57.205669] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:57.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1450318) - No such process 00:05:57.198 ERROR: process (pid: 1450318) is no longer running 00:05:57.198 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.198 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:57.198 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:57.198 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.198 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:57.198 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.198 14:07:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:57.198 14:07:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:57.198 14:07:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:57.198 14:07:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:57.198 14:07:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1450226 00:05:57.198 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1450226 ']' 00:05:57.198 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1450226 00:05:57.198 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:57.198 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.198 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1450226 00:05:57.198 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.198 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.198 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1450226' 00:05:57.198 killing process with pid 1450226 00:05:57.198 14:07:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1450226 00:05:57.198 14:07:57 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1450226 00:05:57.457 00:05:57.457 real 0m1.949s 00:05:57.457 user 0m5.645s 00:05:57.457 sys 0m0.424s 00:05:57.457 14:07:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.457 14:07:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.457 ************************************ 00:05:57.457 END TEST locking_overlapped_coremask 00:05:57.457 ************************************ 00:05:57.457 14:07:58 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:57.457 14:07:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.457 14:07:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.457 14:07:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.457 ************************************ 00:05:57.457 START TEST locking_overlapped_coremask_via_rpc 00:05:57.457 ************************************ 00:05:57.457 14:07:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:57.457 14:07:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1450502 00:05:57.457 14:07:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1450502 /var/tmp/spdk.sock 00:05:57.457 14:07:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:57.457 14:07:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1450502 ']' 00:05:57.457 14:07:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.457 14:07:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.457 14:07:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.457 14:07:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.457 14:07:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.716 [2024-12-10 14:07:58.228412] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:05:57.716 [2024-12-10 14:07:58.228454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1450502 ] 00:05:57.716 [2024-12-10 14:07:58.309345] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
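An aside on the lock files these tests revolve around: check_remaining_locks (traced above) globs /var/tmp/spdk_cpu_lock_* and compares the hits against the expected set for mask 0x7 (cores 0 through 2), while the --disable-cpumask-locks flag behind the "CPU core locks deactivated" notice makes the target skip claiming those files at startup. A minimal sketch for probing the same files by hand, assuming util-linux flock is installed; this is an illustration, not part of the test scripts:

    # List the per-core lock files and test whether a live process still holds each one.
    # Assumes the /var/tmp/spdk_cpu_lock_NNN naming that check_remaining_locks globs above.
    for f in /var/tmp/spdk_cpu_lock_*; do
        [ -e "$f" ] || continue
        if flock -n "$f" true 2>/dev/null; then
            echo "$f: free"
        else
            echo "$f: held"
        fi
    done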
00:05:57.716 [2024-12-10 14:07:58.309375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.716 [2024-12-10 14:07:58.348236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.716 [2024-12-10 14:07:58.348326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.716 [2024-12-10 14:07:58.348326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.976 14:07:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.976 14:07:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:57.976 14:07:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1450680 00:05:57.976 14:07:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:57.976 14:07:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1450680 /var/tmp/spdk2.sock 00:05:57.976 14:07:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1450680 ']' 00:05:57.976 14:07:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.976 14:07:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.976 14:07:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.976 14:07:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.976 14:07:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.976 [2024-12-10 14:07:58.628709] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:05:57.976 [2024-12-10 14:07:58.628762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1450680 ] 00:05:58.235 [2024-12-10 14:07:58.731287] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
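Before the claim failure below, note why core 2 is the contested core in both of these tests: masks 0x7 (cores 0-2) and 0x1c (cores 2-4) intersect in exactly one bit. A one-line shell check, added here for illustration:

    # The AND of the two cpumasks is the overlap the lock machinery fights over.
    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2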
00:05:58.235 [2024-12-10 14:07:58.731318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.235 [2024-12-10 14:07:58.813072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.235 [2024-12-10 14:07:58.816266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.235 [2024-12-10 14:07:58.816267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.803 [2024-12-10 14:07:59.488291] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1450502 has claimed it. 
00:05:58.803 request: 00:05:58.803 { 00:05:58.803 "method": "framework_enable_cpumask_locks", 00:05:58.803 "req_id": 1 00:05:58.803 } 00:05:58.803 Got JSON-RPC error response 00:05:58.803 response: 00:05:58.803 { 00:05:58.803 "code": -32603, 00:05:58.803 "message": "Failed to claim CPU core: 2" 00:05:58.803 } 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1450502 /var/tmp/spdk.sock 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1450502 ']' 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.803 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.062 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.062 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:59.062 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1450680 /var/tmp/spdk2.sock 00:05:59.062 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1450680 ']' 00:05:59.062 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.062 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.062 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
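The NOT rpc_cmd invocation above reduces to a plain rpc.py call against the second target's socket, and it is expected to fail for as long as pid 1450502 holds the lock on core 2. A hedged equivalent of what the wrapper runs (-s is rpc.py's standard socket flag):

    # Fails with -32603 "Failed to claim CPU core: 2" while the first target lives.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk2.sock framework_enable_cpumask_locks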
00:05:59.062 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.062 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.322 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.322 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:59.322 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:59.322 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:59.322 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:59.322 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:59.322 00:05:59.322 real 0m1.742s 00:05:59.322 user 0m0.856s 00:05:59.322 sys 0m0.128s 00:05:59.322 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.322 14:07:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.322 ************************************ 00:05:59.322 END TEST locking_overlapped_coremask_via_rpc 00:05:59.322 ************************************ 00:05:59.322 14:07:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:59.322 14:07:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1450502 ]] 00:05:59.322 14:07:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1450502 00:05:59.322 14:07:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1450502 ']' 00:05:59.322 14:07:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1450502 00:05:59.322 14:07:59 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:59.322 14:07:59 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.322 14:07:59 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1450502 00:05:59.322 14:08:00 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.322 14:08:00 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.322 14:08:00 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1450502' 00:05:59.322 killing process with pid 1450502 00:05:59.322 14:08:00 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1450502 00:05:59.322 14:08:00 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1450502 00:05:59.581 14:08:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1450680 ]] 00:05:59.581 14:08:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1450680 00:05:59.581 14:08:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1450680 ']' 00:05:59.581 14:08:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1450680 00:05:59.581 14:08:00 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:59.841 14:08:00 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:59.841 14:08:00 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1450680 00:05:59.841 14:08:00 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:59.841 14:08:00 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:59.841 14:08:00 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1450680' 00:05:59.841 killing process with pid 1450680 00:05:59.841 14:08:00 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1450680 00:05:59.841 14:08:00 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1450680 00:06:00.101 14:08:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:00.101 14:08:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:00.101 14:08:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1450502 ]] 00:06:00.101 14:08:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1450502 00:06:00.101 14:08:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1450502 ']' 00:06:00.101 14:08:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1450502 00:06:00.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1450502) - No such process 00:06:00.101 14:08:00 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1450502 is not found' 00:06:00.101 Process with pid 1450502 is not found 00:06:00.101 14:08:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1450680 ]] 00:06:00.101 14:08:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1450680 00:06:00.101 14:08:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1450680 ']' 00:06:00.101 14:08:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1450680 00:06:00.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1450680) - No such process 00:06:00.101 14:08:00 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1450680 is not found' 00:06:00.101 Process with pid 1450680 is not found 00:06:00.101 14:08:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:00.101 00:06:00.101 real 0m15.683s 00:06:00.101 user 0m27.605s 00:06:00.101 sys 0m4.938s 00:06:00.101 14:08:00 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.101 14:08:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.101 ************************************ 00:06:00.101 END TEST cpu_locks 00:06:00.101 ************************************ 00:06:00.101 00:06:00.101 real 0m40.693s 00:06:00.101 user 1m18.079s 00:06:00.101 sys 0m8.440s 00:06:00.101 14:08:00 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.101 14:08:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.101 ************************************ 00:06:00.101 END TEST event 00:06:00.101 ************************************ 00:06:00.101 14:08:00 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:00.101 14:08:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.101 14:08:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.101 14:08:00 -- common/autotest_common.sh@10 -- # set +x 00:06:00.101 ************************************ 00:06:00.102 START TEST thread 00:06:00.102 ************************************ 00:06:00.102 14:08:00 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:00.361 * Looking for test storage... 00:06:00.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:00.361 14:08:00 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:00.361 14:08:00 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:00.361 14:08:00 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:00.361 14:08:00 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:00.361 14:08:00 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.361 14:08:00 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.361 14:08:00 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.361 14:08:00 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.361 14:08:00 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.361 14:08:00 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.361 14:08:00 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.361 14:08:00 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.361 14:08:00 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.361 14:08:00 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.361 14:08:00 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.361 14:08:00 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:00.361 14:08:00 thread -- scripts/common.sh@345 -- # : 1 00:06:00.361 14:08:00 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.361 14:08:00 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.362 14:08:00 thread -- scripts/common.sh@365 -- # decimal 1 00:06:00.362 14:08:00 thread -- scripts/common.sh@353 -- # local d=1 00:06:00.362 14:08:00 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.362 14:08:00 thread -- scripts/common.sh@355 -- # echo 1 00:06:00.362 14:08:00 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.362 14:08:00 thread -- scripts/common.sh@366 -- # decimal 2 00:06:00.362 14:08:00 thread -- scripts/common.sh@353 -- # local d=2 00:06:00.362 14:08:00 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.362 14:08:00 thread -- scripts/common.sh@355 -- # echo 2 00:06:00.362 14:08:00 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.362 14:08:00 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.362 14:08:00 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.362 14:08:00 thread -- scripts/common.sh@368 -- # return 0 00:06:00.362 14:08:00 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.362 14:08:00 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:00.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.362 --rc genhtml_branch_coverage=1 00:06:00.362 --rc genhtml_function_coverage=1 00:06:00.362 --rc genhtml_legend=1 00:06:00.362 --rc geninfo_all_blocks=1 00:06:00.362 --rc geninfo_unexecuted_blocks=1 00:06:00.362 00:06:00.362 ' 00:06:00.362 14:08:00 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:00.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.362 --rc genhtml_branch_coverage=1 00:06:00.362 --rc genhtml_function_coverage=1 00:06:00.362 --rc genhtml_legend=1 00:06:00.362 --rc geninfo_all_blocks=1 00:06:00.362 --rc geninfo_unexecuted_blocks=1 00:06:00.362 
00:06:00.362 ' 00:06:00.362 14:08:00 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:00.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.362 --rc genhtml_branch_coverage=1 00:06:00.362 --rc genhtml_function_coverage=1 00:06:00.362 --rc genhtml_legend=1 00:06:00.362 --rc geninfo_all_blocks=1 00:06:00.362 --rc geninfo_unexecuted_blocks=1 00:06:00.362 00:06:00.362 ' 00:06:00.362 14:08:00 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:00.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.362 --rc genhtml_branch_coverage=1 00:06:00.362 --rc genhtml_function_coverage=1 00:06:00.362 --rc genhtml_legend=1 00:06:00.362 --rc geninfo_all_blocks=1 00:06:00.362 --rc geninfo_unexecuted_blocks=1 00:06:00.362 00:06:00.362 ' 00:06:00.362 14:08:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:00.362 14:08:00 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:00.362 14:08:00 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.362 14:08:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.362 ************************************ 00:06:00.362 START TEST thread_poller_perf 00:06:00.362 ************************************ 00:06:00.362 14:08:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:00.362 [2024-12-10 14:08:01.019447] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:06:00.362 [2024-12-10 14:08:01.019520] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1451076 ] 00:06:00.362 [2024-12-10 14:08:01.090965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.621 [2024-12-10 14:08:01.130201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.621 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:01.558 [2024-12-10T13:08:02.298Z] ====================================== 00:06:01.558 [2024-12-10T13:08:02.298Z] busy:2104607964 (cyc) 00:06:01.558 [2024-12-10T13:08:02.298Z] total_run_count: 422000 00:06:01.558 [2024-12-10T13:08:02.298Z] tsc_hz: 2100000000 (cyc) 00:06:01.558 [2024-12-10T13:08:02.298Z] ====================================== 00:06:01.558 [2024-12-10T13:08:02.298Z] poller_cost: 4987 (cyc), 2374 (nsec) 00:06:01.558 00:06:01.558 real 0m1.172s 00:06:01.558 user 0m1.095s 00:06:01.558 sys 0m0.073s 00:06:01.558 14:08:02 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.558 14:08:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:01.558 ************************************ 00:06:01.558 END TEST thread_poller_perf 00:06:01.558 ************************************ 00:06:01.558 14:08:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:01.558 14:08:02 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:01.558 14:08:02 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.558 14:08:02 thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.558 ************************************ 00:06:01.558 START TEST thread_poller_perf 00:06:01.558 ************************************ 00:06:01.558 14:08:02 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:01.558 [2024-12-10 14:08:02.263116] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:06:01.558 [2024-12-10 14:08:02.263183] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1451317 ] 00:06:01.817 [2024-12-10 14:08:02.343326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.817 [2024-12-10 14:08:02.381740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.817 Running 1000 pollers for 1 seconds with 0 microseconds period. 
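Before the second run's counters land, a worked check of how poller_perf derives poller_cost from the fields it prints, using run 1's values from above (shell arithmetic, for illustration only):

    # poller_cost(cyc) = busy cycles / total_run_count; nsec then follows from tsc_hz.
    busy=2104607964; runs=422000; tsc_hz=2100000000
    echo "cyc per poll:  $(( busy / runs ))"                        # -> 4987
    echo "nsec per poll: $(( busy * 1000000000 / tsc_hz / runs ))"  # -> 2374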
00:06:02.755 [2024-12-10T13:08:03.495Z] ====================================== 00:06:02.755 [2024-12-10T13:08:03.495Z] busy:2101333004 (cyc) 00:06:02.755 [2024-12-10T13:08:03.495Z] total_run_count: 5182000 00:06:02.755 [2024-12-10T13:08:03.495Z] tsc_hz: 2100000000 (cyc) 00:06:02.755 [2024-12-10T13:08:03.495Z] ====================================== 00:06:02.755 [2024-12-10T13:08:03.495Z] poller_cost: 405 (cyc), 192 (nsec) 00:06:02.755 00:06:02.755 real 0m1.176s 00:06:02.755 user 0m1.092s 00:06:02.755 sys 0m0.079s 00:06:02.755 14:08:03 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.755 14:08:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:02.755 ************************************ 00:06:02.755 END TEST thread_poller_perf 00:06:02.755 ************************************ 00:06:02.755 14:08:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:02.755 00:06:02.755 real 0m2.665s 00:06:02.755 user 0m2.349s 00:06:02.755 sys 0m0.329s 00:06:02.755 14:08:03 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.755 14:08:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.755 ************************************ 00:06:02.755 END TEST thread 00:06:02.755 ************************************ 00:06:02.755 14:08:03 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:02.755 14:08:03 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:02.755 14:08:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.755 14:08:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.755 14:08:03 -- common/autotest_common.sh@10 -- # set +x 00:06:03.017 ************************************ 00:06:03.017 START TEST app_cmdline 00:06:03.017 ************************************ 00:06:03.017 14:08:03 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:03.017 * Looking for test storage... 
00:06:03.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:03.017 14:08:03 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:03.017 14:08:03 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:03.017 14:08:03 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:03.017 14:08:03 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:03.017 14:08:03 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.017 14:08:03 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.017 14:08:03 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.017 14:08:03 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.017 14:08:03 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.017 14:08:03 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.017 14:08:03 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.017 14:08:03 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.017 14:08:03 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.017 14:08:03 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.017 14:08:03 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.017 14:08:03 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:03.017 14:08:03 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:03.017 14:08:03 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.018 14:08:03 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.018 14:08:03 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:03.018 14:08:03 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:03.018 14:08:03 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.018 14:08:03 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:03.018 14:08:03 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.018 14:08:03 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:03.018 14:08:03 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:03.018 14:08:03 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.018 14:08:03 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:03.018 14:08:03 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.018 14:08:03 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.018 14:08:03 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.018 14:08:03 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:03.018 14:08:03 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.018 14:08:03 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:03.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.018 --rc genhtml_branch_coverage=1 00:06:03.018 --rc genhtml_function_coverage=1 00:06:03.018 --rc genhtml_legend=1 00:06:03.018 --rc geninfo_all_blocks=1 00:06:03.018 --rc geninfo_unexecuted_blocks=1 00:06:03.018 00:06:03.018 ' 00:06:03.018 14:08:03 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:03.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.018 --rc genhtml_branch_coverage=1 00:06:03.018 --rc genhtml_function_coverage=1 00:06:03.018 --rc genhtml_legend=1 00:06:03.018 --rc geninfo_all_blocks=1 00:06:03.018 --rc geninfo_unexecuted_blocks=1 
00:06:03.018 00:06:03.018 ' 00:06:03.018 14:08:03 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:03.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.018 --rc genhtml_branch_coverage=1 00:06:03.018 --rc genhtml_function_coverage=1 00:06:03.018 --rc genhtml_legend=1 00:06:03.018 --rc geninfo_all_blocks=1 00:06:03.018 --rc geninfo_unexecuted_blocks=1 00:06:03.018 00:06:03.018 ' 00:06:03.018 14:08:03 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:03.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.018 --rc genhtml_branch_coverage=1 00:06:03.018 --rc genhtml_function_coverage=1 00:06:03.018 --rc genhtml_legend=1 00:06:03.018 --rc geninfo_all_blocks=1 00:06:03.018 --rc geninfo_unexecuted_blocks=1 00:06:03.018 00:06:03.018 ' 00:06:03.018 14:08:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:03.018 14:08:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1451610 00:06:03.018 14:08:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1451610 00:06:03.018 14:08:03 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:03.018 14:08:03 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1451610 ']' 00:06:03.018 14:08:03 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.018 14:08:03 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.018 14:08:03 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.018 14:08:03 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.018 14:08:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:03.018 [2024-12-10 14:08:03.752700] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:06:03.018 [2024-12-10 14:08:03.752747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1451610 ] 00:06:03.379 [2024-12-10 14:08:03.831814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.379 [2024-12-10 14:08:03.870433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.379 14:08:04 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.379 14:08:04 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:03.379 14:08:04 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:03.640 { 00:06:03.640 "version": "SPDK v25.01-pre git sha1 02d0d9b38", 00:06:03.640 "fields": { 00:06:03.640 "major": 25, 00:06:03.640 "minor": 1, 00:06:03.640 "patch": 0, 00:06:03.640 "suffix": "-pre", 00:06:03.640 "commit": "02d0d9b38" 00:06:03.640 } 00:06:03.640 } 00:06:03.640 14:08:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:03.640 14:08:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:03.640 14:08:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:03.640 14:08:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:03.640 14:08:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:03.640 14:08:04 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.640 14:08:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:03.640 14:08:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:03.640 14:08:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:03.640 14:08:04 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.640 14:08:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:03.640 14:08:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:03.640 14:08:04 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:03.640 14:08:04 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:03.640 14:08:04 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:03.640 14:08:04 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:03.640 14:08:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.640 14:08:04 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:03.640 14:08:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.640 14:08:04 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:03.640 14:08:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.640 14:08:04 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:03.640 14:08:04 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:03.640 14:08:04 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:03.898 request: 00:06:03.898 { 00:06:03.898 "method": "env_dpdk_get_mem_stats", 00:06:03.898 "req_id": 1 00:06:03.898 } 00:06:03.898 Got JSON-RPC error response 00:06:03.898 response: 00:06:03.898 { 00:06:03.898 "code": -32601, 00:06:03.898 "message": "Method not found" 00:06:03.898 } 00:06:03.898 14:08:04 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:03.898 14:08:04 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:03.898 14:08:04 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:03.898 14:08:04 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:03.898 14:08:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1451610 00:06:03.898 14:08:04 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1451610 ']' 00:06:03.898 14:08:04 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1451610 00:06:03.898 14:08:04 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:03.898 14:08:04 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.898 14:08:04 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1451610 00:06:03.898 14:08:04 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.898 14:08:04 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.898 14:08:04 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1451610' 00:06:03.898 killing process with pid 1451610 00:06:03.898 14:08:04 app_cmdline -- common/autotest_common.sh@973 -- # kill 1451610 00:06:03.898 14:08:04 app_cmdline -- common/autotest_common.sh@978 -- # wait 1451610 00:06:04.157 00:06:04.157 real 0m1.337s 00:06:04.157 user 0m1.568s 00:06:04.157 sys 0m0.427s 00:06:04.157 14:08:04 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.157 14:08:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:04.157 ************************************ 00:06:04.157 END TEST app_cmdline 00:06:04.157 ************************************ 00:06:04.157 14:08:04 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:04.157 14:08:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.157 14:08:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.157 14:08:04 -- common/autotest_common.sh@10 -- # set +x 00:06:04.416 ************************************ 00:06:04.416 START TEST version 00:06:04.416 ************************************ 00:06:04.416 14:08:04 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:04.416 * Looking for test storage... 
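A recap before the version suite's output continues: the cmdline test above started spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so any method outside that allowlist is rejected with -32601 "Method not found", which is exactly what the env_dpdk_get_mem_stats probe exercised. A hedged pair of calls against the default /var/tmp/spdk.sock showing both outcomes:

    # On the allowlist -> returns the version JSON shown earlier.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
    # Off the allowlist -> rejected with -32601 "Method not found".
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats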
00:06:04.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:04.416 14:08:05 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:04.416 14:08:05 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:04.416 14:08:05 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:04.416 14:08:05 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:04.416 14:08:05 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.416 14:08:05 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.416 14:08:05 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.416 14:08:05 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.416 14:08:05 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.416 14:08:05 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.416 14:08:05 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.416 14:08:05 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.416 14:08:05 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.416 14:08:05 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.416 14:08:05 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.416 14:08:05 version -- scripts/common.sh@344 -- # case "$op" in 00:06:04.416 14:08:05 version -- scripts/common.sh@345 -- # : 1 00:06:04.416 14:08:05 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.416 14:08:05 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.416 14:08:05 version -- scripts/common.sh@365 -- # decimal 1 00:06:04.416 14:08:05 version -- scripts/common.sh@353 -- # local d=1 00:06:04.416 14:08:05 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.416 14:08:05 version -- scripts/common.sh@355 -- # echo 1 00:06:04.416 14:08:05 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.416 14:08:05 version -- scripts/common.sh@366 -- # decimal 2 00:06:04.416 14:08:05 version -- scripts/common.sh@353 -- # local d=2 00:06:04.416 14:08:05 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.416 14:08:05 version -- scripts/common.sh@355 -- # echo 2 00:06:04.416 14:08:05 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.416 14:08:05 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.416 14:08:05 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.416 14:08:05 version -- scripts/common.sh@368 -- # return 0 00:06:04.416 14:08:05 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.416 14:08:05 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:04.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.416 --rc genhtml_branch_coverage=1 00:06:04.416 --rc genhtml_function_coverage=1 00:06:04.416 --rc genhtml_legend=1 00:06:04.416 --rc geninfo_all_blocks=1 00:06:04.416 --rc geninfo_unexecuted_blocks=1 00:06:04.416 00:06:04.416 ' 00:06:04.416 14:08:05 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:04.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.416 --rc genhtml_branch_coverage=1 00:06:04.416 --rc genhtml_function_coverage=1 00:06:04.416 --rc genhtml_legend=1 00:06:04.416 --rc geninfo_all_blocks=1 00:06:04.416 --rc geninfo_unexecuted_blocks=1 00:06:04.416 00:06:04.416 ' 00:06:04.416 14:08:05 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:04.416 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.416 --rc genhtml_branch_coverage=1 00:06:04.416 --rc genhtml_function_coverage=1 00:06:04.416 --rc genhtml_legend=1 00:06:04.416 --rc geninfo_all_blocks=1 00:06:04.416 --rc geninfo_unexecuted_blocks=1 00:06:04.416 00:06:04.416 ' 00:06:04.416 14:08:05 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:04.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.416 --rc genhtml_branch_coverage=1 00:06:04.416 --rc genhtml_function_coverage=1 00:06:04.416 --rc genhtml_legend=1 00:06:04.416 --rc geninfo_all_blocks=1 00:06:04.416 --rc geninfo_unexecuted_blocks=1 00:06:04.416 00:06:04.416 ' 00:06:04.416 14:08:05 version -- app/version.sh@17 -- # get_header_version major 00:06:04.416 14:08:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:04.416 14:08:05 version -- app/version.sh@14 -- # cut -f2 00:06:04.416 14:08:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.416 14:08:05 version -- app/version.sh@17 -- # major=25 00:06:04.416 14:08:05 version -- app/version.sh@18 -- # get_header_version minor 00:06:04.416 14:08:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:04.416 14:08:05 version -- app/version.sh@14 -- # cut -f2 00:06:04.416 14:08:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.416 14:08:05 version -- app/version.sh@18 -- # minor=1 00:06:04.416 14:08:05 version -- app/version.sh@19 -- # get_header_version patch 00:06:04.416 14:08:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:04.416 14:08:05 version -- app/version.sh@14 -- # cut -f2 00:06:04.416 14:08:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.416 14:08:05 version -- app/version.sh@19 -- # patch=0 00:06:04.416 14:08:05 version -- app/version.sh@20 -- # get_header_version suffix 00:06:04.416 14:08:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:04.416 14:08:05 version -- app/version.sh@14 -- # cut -f2 00:06:04.416 14:08:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.417 14:08:05 version -- app/version.sh@20 -- # suffix=-pre 00:06:04.417 14:08:05 version -- app/version.sh@22 -- # version=25.1 00:06:04.417 14:08:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:04.417 14:08:05 version -- app/version.sh@28 -- # version=25.1rc0 00:06:04.417 14:08:05 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:04.417 14:08:05 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:04.674 14:08:05 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:04.675 14:08:05 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:04.675 00:06:04.675 real 0m0.240s 00:06:04.675 user 0m0.152s 00:06:04.675 sys 0m0.130s 00:06:04.675 14:08:05 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.675 
14:08:05 version -- common/autotest_common.sh@10 -- # set +x 00:06:04.675 ************************************ 00:06:04.675 END TEST version 00:06:04.675 ************************************ 00:06:04.675 14:08:05 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:04.675 14:08:05 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:04.675 14:08:05 -- spdk/autotest.sh@194 -- # uname -s 00:06:04.675 14:08:05 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:04.675 14:08:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:04.675 14:08:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:04.675 14:08:05 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:04.675 14:08:05 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:04.675 14:08:05 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:04.675 14:08:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:04.675 14:08:05 -- common/autotest_common.sh@10 -- # set +x 00:06:04.675 14:08:05 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:04.675 14:08:05 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:04.675 14:08:05 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:04.675 14:08:05 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:04.675 14:08:05 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:04.675 14:08:05 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:04.675 14:08:05 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:04.675 14:08:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:04.675 14:08:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.675 14:08:05 -- common/autotest_common.sh@10 -- # set +x 00:06:04.675 ************************************ 00:06:04.675 START TEST nvmf_tcp 00:06:04.675 ************************************ 00:06:04.675 14:08:05 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:04.675 * Looking for test storage... 
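Looking back at the version suite that just ended: every get_header_version call traced above is the same grep/cut/tr pipeline over include/spdk/version.h. A condensed re-creation of that helper, reconstructed from the trace rather than copied from test/app/version.sh:

    # Pull one SPDK_VERSION_<FIELD> value out of version.h (the fields are tab-separated there).
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
            /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h |
            cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)    # 25
    minor=$(get_header_version MINOR)    # 1
    suffix=$(get_header_version SUFFIX)  # -pre, which version.sh maps to the rc0 in 25.1rc0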
00:06:04.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:04.675 14:08:05 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:04.675 14:08:05 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:04.675 14:08:05 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:04.933 14:08:05 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:04.933 14:08:05 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.933 14:08:05 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.933 14:08:05 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.933 14:08:05 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.933 14:08:05 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.933 14:08:05 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.933 14:08:05 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.933 14:08:05 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.933 14:08:05 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.933 14:08:05 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.933 14:08:05 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.934 14:08:05 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:04.934 14:08:05 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:04.934 14:08:05 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.934 14:08:05 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.934 14:08:05 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:04.934 14:08:05 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:04.934 14:08:05 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.934 14:08:05 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:04.934 14:08:05 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.934 14:08:05 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:04.934 14:08:05 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:04.934 14:08:05 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.934 14:08:05 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:04.934 14:08:05 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.934 14:08:05 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.934 14:08:05 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.934 14:08:05 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:04.934 14:08:05 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.934 14:08:05 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:04.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.934 --rc genhtml_branch_coverage=1 00:06:04.934 --rc genhtml_function_coverage=1 00:06:04.934 --rc genhtml_legend=1 00:06:04.934 --rc geninfo_all_blocks=1 00:06:04.934 --rc geninfo_unexecuted_blocks=1 00:06:04.934 00:06:04.934 ' 00:06:04.934 14:08:05 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:04.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.934 --rc genhtml_branch_coverage=1 00:06:04.934 --rc genhtml_function_coverage=1 00:06:04.934 --rc genhtml_legend=1 00:06:04.934 --rc geninfo_all_blocks=1 00:06:04.934 --rc geninfo_unexecuted_blocks=1 00:06:04.934 00:06:04.934 ' 00:06:04.934 14:08:05 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:04.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.934 --rc genhtml_branch_coverage=1 00:06:04.934 --rc genhtml_function_coverage=1 00:06:04.934 --rc genhtml_legend=1 00:06:04.934 --rc geninfo_all_blocks=1 00:06:04.934 --rc geninfo_unexecuted_blocks=1 00:06:04.934 00:06:04.934 ' 00:06:04.934 14:08:05 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:04.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.934 --rc genhtml_branch_coverage=1 00:06:04.934 --rc genhtml_function_coverage=1 00:06:04.934 --rc genhtml_legend=1 00:06:04.934 --rc geninfo_all_blocks=1 00:06:04.934 --rc geninfo_unexecuted_blocks=1 00:06:04.934 00:06:04.934 ' 00:06:04.934 14:08:05 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:04.934 14:08:05 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:04.934 14:08:05 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:04.934 14:08:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:04.934 14:08:05 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.934 14:08:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.934 ************************************ 00:06:04.934 START TEST nvmf_target_core 00:06:04.934 ************************************ 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:04.934 * Looking for test storage... 00:06:04.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:04.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.934 --rc genhtml_branch_coverage=1 00:06:04.934 --rc genhtml_function_coverage=1 00:06:04.934 --rc genhtml_legend=1 00:06:04.934 --rc geninfo_all_blocks=1 00:06:04.934 --rc geninfo_unexecuted_blocks=1 00:06:04.934 00:06:04.934 ' 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:04.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.934 --rc genhtml_branch_coverage=1 00:06:04.934 --rc genhtml_function_coverage=1 00:06:04.934 --rc genhtml_legend=1 00:06:04.934 --rc geninfo_all_blocks=1 00:06:04.934 --rc geninfo_unexecuted_blocks=1 00:06:04.934 00:06:04.934 ' 00:06:04.934 14:08:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:04.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.934 --rc genhtml_branch_coverage=1 00:06:04.934 --rc genhtml_function_coverage=1 00:06:04.934 --rc genhtml_legend=1 00:06:04.934 --rc geninfo_all_blocks=1 00:06:04.934 --rc geninfo_unexecuted_blocks=1 00:06:04.934 00:06:04.934 ' 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:05.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.194 --rc genhtml_branch_coverage=1 00:06:05.194 --rc genhtml_function_coverage=1 00:06:05.194 --rc genhtml_legend=1 00:06:05.194 --rc geninfo_all_blocks=1 00:06:05.194 --rc geninfo_unexecuted_blocks=1 00:06:05.194 00:06:05.194 ' 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:05.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:05.194 
************************************ 00:06:05.194 START TEST nvmf_abort 00:06:05.194 ************************************ 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:05.194 * Looking for test storage... 00:06:05.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:05.194 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:05.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.195 --rc genhtml_branch_coverage=1 00:06:05.195 --rc genhtml_function_coverage=1 00:06:05.195 --rc genhtml_legend=1 00:06:05.195 --rc geninfo_all_blocks=1 00:06:05.195 --rc geninfo_unexecuted_blocks=1 00:06:05.195 00:06:05.195 ' 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:05.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.195 --rc genhtml_branch_coverage=1 00:06:05.195 --rc genhtml_function_coverage=1 00:06:05.195 --rc genhtml_legend=1 00:06:05.195 --rc geninfo_all_blocks=1 00:06:05.195 --rc geninfo_unexecuted_blocks=1 00:06:05.195 00:06:05.195 ' 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:05.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.195 --rc genhtml_branch_coverage=1 00:06:05.195 --rc genhtml_function_coverage=1 00:06:05.195 --rc genhtml_legend=1 00:06:05.195 --rc geninfo_all_blocks=1 00:06:05.195 --rc geninfo_unexecuted_blocks=1 00:06:05.195 00:06:05.195 ' 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:05.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.195 --rc genhtml_branch_coverage=1 00:06:05.195 --rc genhtml_function_coverage=1 00:06:05.195 --rc genhtml_legend=1 00:06:05.195 --rc geninfo_all_blocks=1 00:06:05.195 --rc geninfo_unexecuted_blocks=1 00:06:05.195 00:06:05.195 ' 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.195 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.454 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:05.454 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:05.454 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.454 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.454 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:05.454 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.454 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:05.454 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:05.454 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.454 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.454 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:05.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
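The nvmftestinit call traced next performs the NET_TYPE=phy network bring-up: the PCI bus is scanned for supported NICs (two Intel E810 ports, device 0x8086:0x159b, turn up at 0000:af:00.0 and 0000:af:00.1 as cvl_0_0 and cvl_0_1), the first port becomes the target interface and the second the initiator, the target port is moved into a private network namespace, addresses are assigned, the NVMe/TCP port is opened in iptables, and a ping in each direction verifies the path. Condensed into plain commands, with interface names and addresses exactly as in this run (the address flushes and the SPDK_NVMF iptables bookkeeping comment are omitted):

  # Target side lives in its own namespace so initiator and target can
  # share one host without the kernel short-circuiting the traffic.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target, inside namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
  ping -c 1 10.0.0.2                                                 # initiator -> target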
00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:05.455 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:12.023 14:08:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:12.023 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:12.023 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:12.023 14:08:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:12.023 Found net devices under 0000:af:00.0: cvl_0_0 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:12.023 Found net devices under 0000:af:00.1: cvl_0_1 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:12.023 14:08:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:12.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:12.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:06:12.023 00:06:12.023 --- 10.0.0.2 ping statistics --- 00:06:12.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:12.023 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:06:12.023 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:12.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:12.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:06:12.023 00:06:12.023 --- 10.0.0.1 ping statistics --- 00:06:12.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:12.023 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:06:12.282 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:12.282 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:12.282 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:12.282 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:12.282 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:12.282 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:12.282 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:12.282 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:12.282 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:12.282 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:12.282 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:12.282 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:12.282 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.282 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1455769 00:06:12.282 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:12.282 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1455769 00:06:12.282 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1455769 ']' 00:06:12.282 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.282 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.282 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.282 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.282 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.282 [2024-12-10 14:08:12.859840] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
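The EAL parameter line that follows shows the target started with core mask 0xE (the "-c 0xE" passed through by nvmfappstart -m 0xE). 0xE is binary 1110, so three reactors come up on cores 1, 2 and 3 ("Total cores available: 3") and core 0 is left free for the initiator-side abort tool, which is launched later with -c 0x1. Decoding the mask is just a bit walk:

  # 0xE = 1110b: bit i set means a reactor thread is pinned to core i
  mask=0xE
  for i in {0..3}; do
    (( (mask >> i) & 1 )) && echo "reactor on core $i"
  done
  # prints: reactor on core 1 / reactor on core 2 / reactor on core 3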
00:06:12.282 [2024-12-10 14:08:12.859883] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:12.282 [2024-12-10 14:08:12.943602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.282 [2024-12-10 14:08:12.983499] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:12.282 [2024-12-10 14:08:12.983535] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:12.282 [2024-12-10 14:08:12.983541] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:12.282 [2024-12-10 14:08:12.983547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:12.282 [2024-12-10 14:08:12.983552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:12.282 [2024-12-10 14:08:12.984824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.282 [2024-12-10 14:08:12.984928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.282 [2024-12-10 14:08:12.984929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.218 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.218 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:13.218 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:13.218 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:13.218 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.218 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:13.218 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.219 [2024-12-10 14:08:13.743205] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.219 Malloc0 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.219 Delay0 
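Once waitforlisten sees the target on its RPC socket, the abort test configures it through rpc_cmd, a thin wrapper around scripts/rpc.py: a TCP transport, a 64 MiB malloc disk, a delay bdev stacked on top of it, and a subsystem exposing the delayed disk on 10.0.0.2:4420. Injecting 1,000,000 us (about a second; the delay bdev takes microseconds) into every I/O class is what gives the abort example commands that are still queued when it fires aborts at them. The same chain, written as roughly equivalent direct rpc.py calls:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc.py bdev_malloc_create 64 4096 -b Malloc0          # 64 MiB, 4 KiB blocks
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000    # avg/p99 read+write latency, us
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420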
00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.219 [2024-12-10 14:08:13.810637] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.219 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:13.219 [2024-12-10 14:08:13.941932] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:15.749 Initializing NVMe Controllers 00:06:15.749 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:15.749 controller IO queue size 128 less than required 00:06:15.749 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:15.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:15.749 Initialization complete. Launching workers. 
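The closing counters printed just below are internally consistent and worth a quick check: every abort that was submitted is accounted for as success, unsuccessful or failed, and the total abort attempts (submitted plus failed-to-submit) match the total number of I/Os the tool issued (completed plus failed, the failed ones presumably being the I/Os cancelled by their aborts):

  echo $((37665 + 57 + 0))    # 37722 = aborts submitted
  echo $((37722 + 62))        # 37784 = total abort attempts
  echo $((123 + 37661))       # 37784 = total I/Os (completed + failed)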
00:06:15.749 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37661 00:06:15.749 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37722, failed to submit 62 00:06:15.749 success 37665, unsuccessful 57, failed 0 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:15.749 rmmod nvme_tcp 00:06:15.749 rmmod nvme_fabrics 00:06:15.749 rmmod nvme_keyring 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1455769 ']' 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1455769 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1455769 ']' 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1455769 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1455769 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1455769' 00:06:15.749 killing process with pid 1455769 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1455769 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1455769 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:15.749 14:08:16 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:15.749 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:17.654 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:17.913 00:06:17.913 real 0m12.652s 00:06:17.913 user 0m13.839s 00:06:17.913 sys 0m6.077s 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:17.913 ************************************ 00:06:17.913 END TEST nvmf_abort 00:06:17.913 ************************************ 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:17.913 ************************************ 00:06:17.913 START TEST nvmf_ns_hotplug_stress 00:06:17.913 ************************************ 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:17.913 * Looking for test storage... 
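The lcov probe that follows (already run once per enclosing test scope above) decides which coverage option spelling to export: the installed lcov reports 1.15, and lt 1.15 2 asks whether that sorts before 2, so the pre-2.x "--rc lcov_branch_coverage=1" names are used. cmp_versions splits both strings on ".", "-" or ":" and compares numerically left to right, stopping at the first difference. A minimal re-implementation, assuming purely numeric fields (the real helper also validates each field through its decimal check):

  lt() {
    local -a v1 v2; local i n
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # first difference decides
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1                                        # equal: not less-than
  }
  lt 1.15 2 && echo "pre-2.x lcov option spelling"  # 1 < 2 on the first field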
00:06:17.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.913 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:17.914 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.914 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:17.914 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:17.914 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.914 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:17.914 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.914 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.914 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.914 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:17.914 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.914 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:17.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.914 --rc genhtml_branch_coverage=1 00:06:17.914 --rc genhtml_function_coverage=1 00:06:17.914 --rc genhtml_legend=1 00:06:17.914 --rc geninfo_all_blocks=1 00:06:17.914 --rc geninfo_unexecuted_blocks=1 00:06:17.914 00:06:17.914 ' 00:06:17.914 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:17.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.914 --rc genhtml_branch_coverage=1 00:06:17.914 --rc genhtml_function_coverage=1 00:06:17.914 --rc genhtml_legend=1 00:06:17.914 --rc geninfo_all_blocks=1 00:06:17.914 --rc geninfo_unexecuted_blocks=1 00:06:17.914 00:06:17.914 ' 00:06:17.914 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:17.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.914 --rc genhtml_branch_coverage=1 00:06:17.914 --rc genhtml_function_coverage=1 00:06:17.914 --rc genhtml_legend=1 00:06:17.914 --rc geninfo_all_blocks=1 00:06:17.914 --rc geninfo_unexecuted_blocks=1 00:06:17.914 00:06:17.914 ' 00:06:17.914 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:17.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.914 --rc genhtml_branch_coverage=1 00:06:17.914 --rc genhtml_function_coverage=1 00:06:17.914 --rc genhtml_legend=1 00:06:17.914 --rc geninfo_all_blocks=1 00:06:17.914 --rc geninfo_unexecuted_blocks=1 00:06:17.914 00:06:17.914 ' 00:06:17.914 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:17.914 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:17.914 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:17.914 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.174 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.174 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.174 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.174 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.174 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.174 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.174 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.174 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.174 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:18.174 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:18.174 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.174 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.174 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:18.174 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:18.174 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:18.174 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:18.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:18.175 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:24.749 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.749 
14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:24.749 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.749 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:24.750 Found net devices under 0000:af:00.0: cvl_0_0 00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:06:24.750 Found net devices under 0000:af:00.1: cvl_0_1
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
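
At this point the harness has split the two E810 ports into a self-contained target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace to serve as the target at 10.0.0.2, while its peer port cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed from the trace into a standalone sketch (interface and namespace names exactly as logged; run as root):

    ip -4 addr flush cvl_0_0                     # start both ports from a clean slate
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                 # private network stack for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

Because the two ports now live in separate network stacks, 10.0.0.1 to 10.0.0.2 traffic has to leave through the NIC instead of short-circuiting over kernel loopback, which is what makes the NET_TYPE=phy variant of this test exercise real hardware. The loopback bring-up, firewall rule, and ping checks that follow in the trace complete the wiring.
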
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:06:24.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:24.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms
00:06:24.750
00:06:24.750 --- 10.0.0.2 ping statistics ---
00:06:24.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:24.750 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:24.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:24.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms
00:06:24.750
00:06:24.750 --- 10.0.0.1 ping statistics ---
00:06:24.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:24.750 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:06:24.750 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:06:25.009 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:06:25.009 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:06:25.009 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:25.009 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:25.009 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1460267
00:06:25.009 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:06:25.009 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1460267
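
nvmfappstart launches nvmf_tgt wrapped in the namespace command recorded in NVMF_TARGET_NS_CMD, then blocks in waitforlisten until the target's JSON-RPC socket answers. A simplified stand-in for that polling step (the real helper lives in common/autotest_common.sh; retrying rpc_get_methods is just one reasonable way to probe readiness):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Poll /var/tmp/spdk.sock until the app responds, bailing out if it died early.
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo 'nvmf_tgt exited before listening' >&2; exit 1; }
        sleep 0.1
    done

-m 0xE pins the app to cores 1-3 (hence the three reactors reporting in shortly), and -e 0xFFFF enables every tracepoint group, which the 'Tracepoint Group Mask 0xFFFF specified' notice below acknowledges.
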
00:06:25.009 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1460267 ']'
00:06:25.009 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:25.009 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:25.009 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:25.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:25.009 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:25.009 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:25.009 [2024-12-10 14:08:25.580143] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization...
00:06:25.009 [2024-12-10 14:08:25.580194] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:25.009 [2024-12-10 14:08:25.671092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:25.009 [2024-12-10 14:08:25.710018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:25.009 [2024-12-10 14:08:25.710055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:25.009 [2024-12-10 14:08:25.710061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:25.009 [2024-12-10 14:08:25.710067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:25.009 [2024-12-10 14:08:25.710072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
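
Stripped of the xtrace noise, the provisioning that follows once the reactors below are running amounts to a short JSON-RPC sequence (rpc.py is invoked with its full path in the trace, abbreviated to $rpc here):

    $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport; -u sets in-capsule data size
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10               # any host allowed, up to 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0        # 32 MB RAM disk, 512-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc bdev_null_create NULL1 1000 512             # 1000 MB null bdev, 512-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

Delay0 wraps Malloc0 with roughly one second of artificial latency per I/O (the -r/-t/-w/-n arguments are in microseconds), which keeps commands in flight long enough for namespace removal to race against them; NULL1 exists so the resize half of the stress loop has something to grow.
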
00:06:25.009 [2024-12-10 14:08:25.711346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.009 [2024-12-10 14:08:25.711475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.009 [2024-12-10 14:08:25.711476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:25.946 14:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.946 14:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:25.946 14:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:25.946 14:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:25.946 14:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:25.946 14:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:25.946 14:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:25.946 14:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:25.946 [2024-12-10 14:08:26.634061] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.946 14:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:26.206 14:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:26.464 [2024-12-10 14:08:27.027501] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:26.464 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:26.723 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:26.723 Malloc0 00:06:26.982 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:26.982 Delay0 00:06:26.982 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.241 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:27.500 NULL1 00:06:27.500 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:27.759 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1460758 00:06:27.759 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:27.759 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:27.759 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.759 Read completed with error (sct=0, sc=11) 00:06:27.759 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.018 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:28.018 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:28.277 true 00:06:28.277 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:28.277 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.214 14:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.214 14:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:29.214 14:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:29.473 true 00:06:29.473 14:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:29.473 14:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.733 14:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.993 14:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:29.993 14:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:29.993 true 00:06:29.993 14:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:29.993 14:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.371 14:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.371 14:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:31.371 14:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:31.630 true 00:06:31.630 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:31.630 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.890 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.890 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:31.890 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:32.149 true 00:06:32.149 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:32.149 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.527 14:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.527 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:33.527 14:08:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:33.527 true 00:06:33.786 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:33.786 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.353 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.353 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.611 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:34.611 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:34.869 true 00:06:34.869 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:34.869 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.128 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.386 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:35.386 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:35.386 true 00:06:35.386 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:35.386 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.763 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.763 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:36.763 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1009 00:06:37.022 true 00:06:37.022 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:37.022 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.958 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.958 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:37.958 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:38.217 true 00:06:38.217 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:38.217 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.217 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.476 14:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:38.476 14:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:38.735 true 00:06:38.735 14:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:38.735 14:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.111 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.111 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:40.111 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:40.111 true 00:06:40.370 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:40.370 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.937 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.196 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:41.196 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:41.477 true 00:06:41.477 14:08:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:41.477 14:08:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.855 14:08:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.856 14:08:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:41.856 14:08:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:42.114 true 00:06:42.114 14:08:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:42.114 14:08:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.050 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.050 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.050 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.050 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.309 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:43.309 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:43.568 true 00:06:43.568 14:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:43.568 14:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.503 14:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.503 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:44.503 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:44.762 true 00:06:44.762 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:44.762 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.020 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.020 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:45.020 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:45.279 true 00:06:45.279 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:45.279 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.654 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.654 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:46.654 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:46.913 true 00:06:46.913 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:46.913 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.848 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.848 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.848 14:08:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:47.848 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:48.107 true 00:06:48.107 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:48.107 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.366 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.624 14:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:48.624 14:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:48.624 true 00:06:48.624 14:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:48.624 14:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.001 14:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.001 14:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:50.002 14:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:50.260 true 00:06:50.260 14:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:50.260 14:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.197 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.197 14:08:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.197 14:08:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:51.197 14:08:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:51.455 true 00:06:51.455 14:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:51.455 14:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.714 14:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.972 14:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:51.972 14:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:51.972 true 00:06:51.972 14:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:51.972 14:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.350 14:08:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.350 14:08:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:53.350 14:08:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:53.609 true 00:06:53.609 14:08:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:53.609 14:08:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.543 14:08:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.543 14:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:54.543 14:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1025 00:06:54.801 true 00:06:54.801 14:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:54.801 14:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.059 14:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.059 14:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:55.059 14:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:55.318 true 00:06:55.318 14:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:55.318 14:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.695 14:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.695 14:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:56.695 14:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:56.954 true 00:06:56.954 14:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:56.954 14:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.889 14:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.889 14:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:57.889 14:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:57.889 Initializing NVMe Controllers 00:06:57.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:57.889 
Controller IO queue size 128, less than required.
00:06:57.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:57.889 Controller IO queue size 128, less than required.
00:06:57.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:57.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:57.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:57.889 Initialization complete. Launching workers.
00:06:57.889 ========================================================
00:06:57.889                                                                              Latency(us)
00:06:57.889 Device Information                                                         :     IOPS     MiB/s     Average        min        max
00:06:57.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   2077.55      1.01    40565.05    1853.56 1128241.96
00:06:57.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:  17290.11      8.44     7402.71    2283.17  368947.04
00:06:57.889 ========================================================
00:06:57.889 Total                                                                    :  19367.66      9.46    10960.00    1853.56 1128241.96
00:06:57.889
00:06:58.148 true 00:06:58.148 14:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1460758 00:06:58.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1460758) - No such process 00:06:58.148 14:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1460758 00:06:58.148 14:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.408 14:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:58.408 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:58.408 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:58.408 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:58.408 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:58.408 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:58.666 null0 00:06:58.666 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:58.666 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:58.666 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:58.925 null1 00:06:58.925 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:59.184 null2 00:06:59.184 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:59.184 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:59.184 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:59.184 null3 00:06:59.442 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:59.442 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:59.442 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:59.442 null4 00:06:59.442 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:59.442 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:59.442 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:59.700 null5 00:06:59.700 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:59.700 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:59.700 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:59.959 null6 00:06:59.959 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:59.959 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:59.959 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:00.219 null7 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
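
[Editor's note] The ns_hotplug_stress.sh@58-@60 records above trace the test's null-bdev creation loop. A minimal sketch of that loop, reconstructed from the logged commands (the rpc.py path is shortened here; the log invokes it via the full /var/jenkins/workspace/... path):

    # Reconstructed from the @58-@60 trace; arguments exactly as logged.
    nthreads=8
    pids=()
    for (( i = 0; i < nthreads; i++ )); do
        # one 100 MB null bdev with a 4096-byte block size per worker (null0..null7)
        scripts/rpc.py bdev_null_create "null$i" 100 4096
    done
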
00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
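
[Editor's note] The interleaved @14-@18 records come from eight concurrent copies of the script's add_remove helper, each hot-adding and hot-removing one namespace against the same subsystem. A sketch of what each worker runs, reconstructed from the logged line numbers and commands (not the verbatim script):

    add_remove() {
        local nsid=$1 bdev=$2                     # @14: e.g. nsid=1 bdev=null0
        for (( i = 0; i < 10; i++ )); do          # @16: ten add/remove passes
            # @17: attach the null bdev as namespace $nsid, @18: detach it again
            scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
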
00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
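
[Editor's note] The @62-@64 records are the loop that forks those workers into the background and collects their PIDs for the @66 wait seen just below (wait 1466339 1466340 ...). Same reconstruction caveat as above:

    for (( i = 0; i < nthreads; i++ )); do
        add_remove $(( i + 1 )) "null$i" &        # @63: namespace IDs 1-8 onto null0-null7
        pids+=($!)                                # @64: remember each worker's PID
    done
    wait "${pids[@]}"                             # @66: block until all eight workers finish
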
00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1466339 1466340 1466341 1466343 1466346 1466347 1466349 1466352 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.219 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:00.478 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:00.478 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:00.478 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:00.478 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.478 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:00.478 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:00.478 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:00.478 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.479 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:00.738 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:00.738 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:00.738 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:00.738 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:00.738 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:00.738 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:00.738 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:00.738 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.997 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:01.256 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.256 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:01.256 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:01.256 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:01.256 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:01.256 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:01.256 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:01.256 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:01.515 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.774 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:02.033 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:02.033 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:02.033 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:02.033 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:02.033 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.033 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:02.033 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:02.033 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:02.292 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.292 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.292 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:02.292 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.292 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.292 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:02.292 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.292 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.292 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:02.292 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:02.292 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.292 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:02.292 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.292 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.292 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:02.292 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.292 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.292 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:02.292 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.292 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.292 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:02.293 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.293 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.293 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:02.551 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:02.551 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:02.551 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:02.551 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:02.551 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:02.551 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:02.551 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:02.551 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.551 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.551 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.552 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:02.552 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.552 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.552 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:02.552 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.552 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.552 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:02.552 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.552 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.552 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:02.552 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.552 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.810 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:02.810 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.810 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.810 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:02.810 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:02.810 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.810 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:02.810 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.810 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.810 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:02.810 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:02.811 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:02.811 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:02.811 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:02.811 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:02.811 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.811 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:02.811 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.069 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:03.414 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:03.414 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:03.414 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:03.414 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:03.414 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.414 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:03.414 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:03.414 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.414 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:03.757 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:03.757 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:03.757 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:03.757 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.757 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:03.757 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:03.757 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:03.757 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
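The `(( ++i ))` / `(( i < 10 ))` / rpc.py triplets above are bash xtrace from lines 16-18 of test/nvmf/target/ns_hotplug_stress.sh. Adds and removes arrive in clusters of eight with the NSID order shuffled between clusters, which is consistent with eight concurrent add/remove workers, one per namespace. A minimal sketch of that structure (the script body itself is not visible in the trace, so the function name and job layout here are assumptions):

    # one worker per namespace: attach nullN as NSID n, detach it, ten times over
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; ++i)); do
            scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &   # null0..null7 back the eight NSIDs
    done
    wait

Running the workers in the background is what produces the interleaved counter increments in the trace: each subshell has its own i, but all of them share the script's xtrace output stream.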
00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:04.016 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:04.275 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:04.275 rmmod nvme_tcp 00:07:04.275 rmmod nvme_fabrics 00:07:04.275 rmmod nvme_keyring 00:07:04.540 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:04.540 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:04.540 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:04.540 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1460267 ']' 00:07:04.540 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1460267 00:07:04.540 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1460267 ']' 00:07:04.540 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1460267 00:07:04.540 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:04.540 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.540 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1460267 00:07:04.540 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:04.540 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:04.540 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1460267' 00:07:04.540 killing process with pid 1460267 00:07:04.540 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1460267 00:07:04.540 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1460267 00:07:04.540 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:04.540 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:04.540 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:04.541 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:04.541 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:04.541 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:04.541 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:04.541 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:04.541 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:04.541 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.541 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:04.541 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.081 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:07.081 00:07:07.082 real 0m48.858s 00:07:07.082 user 3m15.022s 00:07:07.082 sys 0m16.039s 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:07.082 ************************************ 00:07:07.082 END TEST nvmf_ns_hotplug_stress 00:07:07.082 ************************************ 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:07.082 ************************************ 00:07:07.082 START TEST nvmf_delete_subsystem 00:07:07.082 ************************************ 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:07.082 * Looking for test storage... 
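Before the delete_subsystem test gets going, the teardown that closed the previous test is worth decoding. nvmftestfini (nvmf/common.sh in the trace) syncs, unloads the initiator-side kernel modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), kills the target by its saved PID 1460267, strips only the SPDK-tagged iptables rules, and tears down the test namespace and addresses. Condensed into a sketch (paraphrased from the traced commands, not the verbatim function):

    # initiator side: drop the kernel NVMe-oF modules (retried up to 20x in the real script)
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # target side: kill the nvmf_tgt process recorded at startup and reap it
    kill 1460267 && wait 1460267
    # network: restore iptables minus the SPDK_NVMF-tagged rules, then remove the netns
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk 2> /dev/null
    ip -4 addr flush cvl_0_1

Tagging every inserted rule with an `SPDK_NVMF` comment is what makes the grep -v pipeline safe: only rules the test added are dropped, and any pre-existing firewall state survives.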
00:07:07.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:07.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.082 --rc genhtml_branch_coverage=1 00:07:07.082 --rc genhtml_function_coverage=1 00:07:07.082 --rc genhtml_legend=1 00:07:07.082 --rc geninfo_all_blocks=1 00:07:07.082 --rc geninfo_unexecuted_blocks=1 00:07:07.082 00:07:07.082 ' 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:07.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.082 --rc genhtml_branch_coverage=1 00:07:07.082 --rc genhtml_function_coverage=1 00:07:07.082 --rc genhtml_legend=1 00:07:07.082 --rc geninfo_all_blocks=1 00:07:07.082 --rc geninfo_unexecuted_blocks=1 00:07:07.082 00:07:07.082 ' 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:07.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.082 --rc genhtml_branch_coverage=1 00:07:07.082 --rc genhtml_function_coverage=1 00:07:07.082 --rc genhtml_legend=1 00:07:07.082 --rc geninfo_all_blocks=1 00:07:07.082 --rc geninfo_unexecuted_blocks=1 00:07:07.082 00:07:07.082 ' 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:07.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.082 --rc genhtml_branch_coverage=1 00:07:07.082 --rc genhtml_function_coverage=1 00:07:07.082 --rc genhtml_legend=1 00:07:07.082 --rc geninfo_all_blocks=1 00:07:07.082 --rc geninfo_unexecuted_blocks=1 00:07:07.082 00:07:07.082 ' 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.082 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:07.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:07.083 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:13.656 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:13.656 
14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:13.656 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:13.656 Found net devices under 0000:af:00.0: cvl_0_0 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:13.656 Found net devices under 0000:af:00.1: cvl_0_1 
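Two notes on the setup trace above. First, the earlier `common.sh: line 33: [: : integer expression expected` message is bash objecting to `[ '' -eq 1 ]`, an integer test against an empty variable; the script simply takes the false branch, so it is noise rather than a failure (guarding with `[ "${FLAG:-0}" -eq 1 ]` is the usual fix, FLAG standing in for whatever variable line 33 actually reads). Second, the device discovery is the standard common.sh walk: collect the supported Intel/Mellanox PCI device IDs, then resolve each matching PCI function to its kernel netdev through sysfs. The resolution loop, reconstructed from the @410-@429 trace:

    # map each NVMe-oF-capable PCI function to its kernel interface name
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:af:00.0/net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

Here both functions of the e810 NIC (0x8086:0x159b) resolve, yielding cvl_0_0 and cvl_0_1 as the two endpoints of the loopback topology built next.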
00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:13.656 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:13.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:13.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:07:13.657 00:07:13.657 --- 10.0.0.2 ping statistics --- 00:07:13.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.657 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:13.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:13.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:07:13.657 00:07:13.657 --- 10.0.0.1 ping statistics --- 00:07:13.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.657 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:13.657 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:13.916 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:13.916 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:13.916 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:13.916 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:13.916 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1471659 00:07:13.916 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:13.916 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1471659 00:07:13.916 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1471659 ']' 00:07:13.916 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.916 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.916 14:09:14 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.917 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.917 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:13.917 [2024-12-10 14:09:14.476733] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:07:13.917 [2024-12-10 14:09:14.476780] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.917 [2024-12-10 14:09:14.562039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:13.917 [2024-12-10 14:09:14.599602] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.917 [2024-12-10 14:09:14.599639] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:13.917 [2024-12-10 14:09:14.599646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.917 [2024-12-10 14:09:14.599652] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.917 [2024-12-10 14:09:14.599657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:13.917 [2024-12-10 14:09:14.600862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.917 [2024-12-10 14:09:14.600863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:14.853 [2024-12-10 14:09:15.363396] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:14.853 14:09:15 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:14.853 [2024-12-10 14:09:15.383603] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:14.853 NULL1 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:14.853 Delay0 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1471828 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:14.853 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:14.853 [2024-12-10 14:09:15.495339] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
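With both NIC ports found, the test builds its loopback topology: cvl_0_0 is moved into a private namespace (cvl_0_0_ns_spdk) as the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and the two pings above prove each direction. nvmf_tgt then starts inside that namespace (pid 1471659) and the subsystem is assembled over RPC; the Delay0 bdev adds 1,000,000 us (about 1 s) of latency to every I/O class, which is what will keep a full queue pinned when the subsystem is deleted. The traced sequence, condensed (paths shortened):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512             # 1000 MB backing, 512 B blocks
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000            # avg/p99 read and write latency, usec
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # queue-depth-128 randrw (70% reads) against the listener for 5 s
    build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &

The deprecation warning from subsystem.c is incidental: perf connects to the discovery subsystem on a listener that was only added to cnode1, which the target currently allows but flags.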
00:07:16.754 14:09:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:16.754 14:09:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:16.754 14:09:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:17.013 [... run of 'Read/Write completed with error (sct=0, sc=8)' records interleaved with 'starting I/O failed: -6', elided: queued perf I/O is failed back as the subsystem is torn down ...]
00:07:17.013 [2024-12-10 14:09:17.573127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db42c0 is same with the state(6) to be set
00:07:17.013 [... further run of 'Read/Write completed with error (sct=0, sc=8)' records, with a second burst of 'starting I/O failed: -6', elided ...]
00:07:17.014 [2024-12-10 14:09:17.573904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efd5c00d490 is same with the state(6) to be set
00:07:17.949 [2024-12-10 14:09:18.547694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db59b0 is same with the state(6) to be set
00:07:17.949 [... run of 'Read/Write completed with error (sct=0, sc=8)' records elided ...]
00:07:17.949 [2024-12-10 14:09:18.575523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efd5c00d020 is same with the state(6) to be set
00:07:17.949 [... run of 'Read/Write completed with error (sct=0, sc=8)' records elided ...]
00:07:17.949 [2024-12-10 14:09:18.575702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efd5c000c40 is same with the state(6) to be set
00:07:17.949 [... run of 'Read/Write completed with error (sct=0, sc=8)' records elided ...]
00:07:17.949 [2024-12-10 14:09:18.575867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efd5c00d7c0 is same with the state(6) to be set
00:07:17.949 [... run of 'Read/Write completed with error (sct=0, sc=8)' records elided ...]
00:07:17.949 [2024-12-10 14:09:18.576782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db4780 is same with the state(6) to be set
00:07:17.949 Initializing NVMe Controllers
00:07:17.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:17.949 Controller IO queue size 128, less than required.
00:07:17.949 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:17.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:17.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:17.949 Initialization complete. Launching workers.
00:07:17.949 ========================================================
00:07:17.949                                                                                Latency(us)
00:07:17.949 Device Information                                                        :       IOPS      MiB/s    Average        min        max
00:07:17.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     154.64       0.08  897981.33     263.31 2002390.40
00:07:17.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     168.06       0.08 1106964.57    1296.07 2001610.18
00:07:17.949 ========================================================
00:07:17.949 Total                                                                     :     322.70       0.16 1006820.06     263.31 2002390.40
00:07:17.949
00:07:17.949 [2024-12-10 14:09:18.577113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db59b0 (9): Bad file descriptor
00:07:17.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:17.949 14:09:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.949 14:09:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:17.949 14:09:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1471828
00:07:17.949 14:09:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1471828
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1471828) - No such process
00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1471828
00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1471828
00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1471828
00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:18.517 14:09:19
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:18.517 [2024-12-10 14:09:19.108086] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1472378 00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1472378 00:07:18.517 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:18.517 [2024-12-10 14:09:19.196667] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
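The error storm earlier in the log is the point of the test: nvmf_delete_subsystem was issued while perf still held a full queue against Delay0, so every outstanding I/O completed with an error and perf exited nonzero, which the NOT wait wrapper asserts. The polling loop traced at script lines 34-38 (re-armed above at lines 56-60 with a tighter bound for the second pass) amounts to the following sketch; the loop structure is inferred from the trace, only the individual commands are verbatim:

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # kill -0 probes the PID without signaling it
        sleep 0.5
        (( delay++ > 30 )) && exit 1            # assumed failure action if perf never exits
    done
    NOT wait "$perf_pid"                        # harness helper asserting wait returns nonzero

The second pass differs only in that the subsystem stays in place, perf runs its full three seconds against the one-second Delay0 latency, and the plain wait at line 67 is expected to succeed.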
00:07:19.083 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:19.083 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1472378 00:07:19.083 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:19.649 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:19.649 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1472378 00:07:19.649 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:19.908 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:19.908 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1472378 00:07:19.908 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:20.474 14:09:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:20.474 14:09:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1472378 00:07:20.474 14:09:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:21.039 14:09:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:21.039 14:09:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1472378 00:07:21.039 14:09:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:21.605 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:21.605 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1472378 00:07:21.605 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:21.863 Initializing NVMe Controllers 00:07:21.863 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:21.863 Controller IO queue size 128, less than required. 00:07:21.863 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:21.863 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:21.863 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:21.863 Initialization complete. Launching workers. 
00:07:21.863 ========================================================
00:07:21.863                                                                                Latency(us)
00:07:21.863 Device Information                                                        :       IOPS      MiB/s    Average        min        max
00:07:21.863 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     128.00       0.06 1001989.69 1000110.38 1007578.29
00:07:21.863 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     128.00       0.06 1003628.69 1000162.08 1010699.51
00:07:21.863 ========================================================
00:07:21.863 Total                                                                     :     256.00       0.12 1002809.19 1000110.38 1010699.51
00:07:21.863
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1472378
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1472378) - No such process
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1472378
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1471659 ']'
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1471659
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1471659 ']'
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1471659
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1471659
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '['
reactor_0 = sudo ']' 00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1471659' 00:07:22.122 killing process with pid 1471659 00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1471659 00:07:22.122 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1471659 00:07:22.381 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:22.381 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:22.381 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:22.381 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:22.381 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:22.381 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:22.381 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:22.381 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:22.381 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:22.381 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.381 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:22.381 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.285 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:24.285 00:07:24.285 real 0m17.603s 00:07:24.285 user 0m30.745s 00:07:24.285 sys 0m6.144s 00:07:24.285 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.285 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.285 ************************************ 00:07:24.285 END TEST nvmf_delete_subsystem 00:07:24.285 ************************************ 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:24.543 ************************************ 00:07:24.543 START TEST nvmf_host_management 00:07:24.543 ************************************ 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:24.543 * Looking for test storage... 
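From here nvmftestfini unwinds everything the test set up: the host-side NVMe modules, the target reactor process, and the harness networking. A rough equivalent of the traced path (variable names and the namespace-removal command are assumptions; everything else appears verbatim in the log):

    sync
    modprobe -v -r nvme-tcp                 # -v echoes the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines seen above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"      # killprocess: stop the nvmf_tgt reactor (pid 1471659 in this run)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-added rules
    ip netns delete cvl_0_0_ns_spdk         # what _remove_spdk_ns amounts to for this run (assumed)
    ip -4 addr flush cvl_0_1                # traced on the next line of the log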
00:07:24.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:24.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.543 --rc genhtml_branch_coverage=1 00:07:24.543 --rc genhtml_function_coverage=1 00:07:24.543 --rc genhtml_legend=1 00:07:24.543 --rc geninfo_all_blocks=1 00:07:24.543 --rc geninfo_unexecuted_blocks=1 00:07:24.543 00:07:24.543 ' 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:24.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.543 --rc genhtml_branch_coverage=1 00:07:24.543 --rc genhtml_function_coverage=1 00:07:24.543 --rc genhtml_legend=1 00:07:24.543 --rc geninfo_all_blocks=1 00:07:24.543 --rc geninfo_unexecuted_blocks=1 00:07:24.543 00:07:24.543 ' 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:24.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.543 --rc genhtml_branch_coverage=1 00:07:24.543 --rc genhtml_function_coverage=1 00:07:24.543 --rc genhtml_legend=1 00:07:24.543 --rc geninfo_all_blocks=1 00:07:24.543 --rc geninfo_unexecuted_blocks=1 00:07:24.543 00:07:24.543 ' 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:24.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.543 --rc genhtml_branch_coverage=1 00:07:24.543 --rc genhtml_function_coverage=1 00:07:24.543 --rc genhtml_legend=1 00:07:24.543 --rc geninfo_all_blocks=1 00:07:24.543 --rc geninfo_unexecuted_blocks=1 00:07:24.543 00:07:24.543 ' 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:24.543 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:24.801 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.801 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.801 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.801 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.801 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.801 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:24.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:24.802 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:31.371 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:31.371 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:31.372 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:31.372 Found net devices under 0000:af:00.0: cvl_0_0 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.372 14:09:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:31.372 Found net devices under 0000:af:00.1: cvl_0_1 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:31.372 14:09:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:31.372 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:31.372 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:31.372 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:31.372 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:31.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:31.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms
00:07:31.372
00:07:31.372 --- 10.0.0.2 ping statistics ---
00:07:31.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:31.372 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms
00:07:31.372 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:31.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:31.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms
00:07:31.372
00:07:31.372 --- 10.0.0.1 ping statistics ---
00:07:31.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:31.372 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms
00:07:31.372 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.372 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:31.372 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:31.372 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.372 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:31.372 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:31.372 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.372 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:31.372 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:31.632 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:31.632 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:31.632 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:31.632 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:31.632 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:31.632 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.632 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1477046 00:07:31.632 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1477046 00:07:31.632 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:31.632 14:09:32
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1477046 ']' 00:07:31.632 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.632 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.632 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.632 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.632 14:09:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.632 [2024-12-10 14:09:32.191350] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:07:31.632 [2024-12-10 14:09:32.191396] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.632 [2024-12-10 14:09:32.277568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.632 [2024-12-10 14:09:32.317350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.632 [2024-12-10 14:09:32.317386] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.632 [2024-12-10 14:09:32.317395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.632 [2024-12-10 14:09:32.317402] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.632 [2024-12-10 14:09:32.317407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:31.632 [2024-12-10 14:09:32.319003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.632 [2024-12-10 14:09:32.319112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.632 [2024-12-10 14:09:32.319223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.632 [2024-12-10 14:09:32.319237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:32.569 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.569 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:32.569 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:32.569 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:32.569 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.569 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.569 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:32.569 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.569 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.569 [2024-12-10 14:09:33.066525] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.569 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.569 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:32.569 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:32.569 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.569 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:32.569 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:32.569 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:32.569 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.570 Malloc0 00:07:32.570 [2024-12-10 14:09:33.142927] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1477129 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1477129 /var/tmp/bdevperf.sock 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1477129 ']' 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:32.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:32.570 { 00:07:32.570 "params": { 00:07:32.570 "name": "Nvme$subsystem", 00:07:32.570 "trtype": "$TEST_TRANSPORT", 00:07:32.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:32.570 "adrfam": "ipv4", 00:07:32.570 "trsvcid": "$NVMF_PORT", 00:07:32.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:32.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:32.570 "hdgst": ${hdgst:-false}, 00:07:32.570 "ddgst": ${ddgst:-false} 00:07:32.570 }, 00:07:32.570 "method": "bdev_nvme_attach_controller" 00:07:32.570 } 00:07:32.570 EOF 00:07:32.570 )") 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:32.570 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:32.570 "params": { 00:07:32.570 "name": "Nvme0", 00:07:32.570 "trtype": "tcp", 00:07:32.570 "traddr": "10.0.0.2", 00:07:32.570 "adrfam": "ipv4", 00:07:32.570 "trsvcid": "4420", 00:07:32.570 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:32.570 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:32.570 "hdgst": false, 00:07:32.570 "ddgst": false 00:07:32.570 }, 00:07:32.570 "method": "bdev_nvme_attach_controller" 00:07:32.570 }' 00:07:32.570 [2024-12-10 14:09:33.240656] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:07:32.570 [2024-12-10 14:09:33.240700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477129 ] 00:07:32.829 [2024-12-10 14:09:33.321715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.829 [2024-12-10 14:09:33.361684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.829 Running I/O for 10 seconds... 00:07:32.829 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.829 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:32.829 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:32.829 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.829 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.088 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.088 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:33.088 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:33.088 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:33.088 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:33.088 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:33.088 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:33.088 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:33.088 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:33.088 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:33.088 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:33.088 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.088 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.088 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.088 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=95 00:07:33.088 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 95 -ge 100 ']' 00:07:33.088 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:33.350 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:33.350 
14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:33.350 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:33.350 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:33.350 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.350 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.350 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.350 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:07:33.350 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:07:33.350 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:33.350 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:33.350 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:33.350 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:33.350 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.350 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.350
[2024-12-10 14:09:33.921658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd22710 is same with the state(6) to be set 00:07:33.350
[... same tcp.c:1790 nvmf_tcp_qpair_set_recv_state error for tqpair=0xd22710 repeated verbatim, timestamps 14:09:33.921733 through 14:09:33.922111 ...]
[2024-12-10 14:09:33.922182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.351
[2024-12-10 14:09:33.922226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.351
[... same READ command / ABORTED - SQ DELETION completion pair repeated for cid:1 through cid:63 (lba:98432 through lba:106368, stepping by 128 blocks), timestamps 14:09:33.922244 through 14:09:33.923188 ...]
[2024-12-10 14:09:33.923196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cac550 is same with the state(6) to be set 00:07:33.352
[2024-12-10 14:09:33.924149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:33.352
task offset: 98304 on job bdev=Nvme0n1 fails 00:07:33.352
00:07:33.352
00:07:33.352                                                  Latency(us)
00:07:33.352 [2024-12-10T13:09:34.092Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:07:33.352 [2024-12-10T13:09:34.092Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:33.352 [2024-12-10T13:09:34.092Z] Job: Nvme0n1 ended in about 0.40 seconds with error
00:07:33.352 [2024-12-10T13:09:34.092Z]          Verification LBA range: start 0x0 length 0x400
00:07:33.352 [2024-12-10T13:09:34.092Z]          Nvme0n1             :       0.40    1908.58     119.29     159.05       0.00   30130.25    3698.10   26339.23
00:07:33.352 [2024-12-10T13:09:34.092Z]
=================================================================================================================== 00:07:33.352 [2024-12-10T13:09:34.092Z] Total : 1908.58 119.29 159.05 0.00 30130.25 3698.10 26339.23 00:07:33.352 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.352 [2024-12-10 14:09:33.926584] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:33.352 [2024-12-10 14:09:33.926608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c98aa0 (9): Bad file descriptor 00:07:33.352 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:33.352 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.352 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.352 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.352 14:09:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:33.352 [2024-12-10 14:09:34.028397] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:34.290 14:09:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1477129 00:07:34.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1477129) - No such process 00:07:34.290 14:09:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:34.290 14:09:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:34.290 14:09:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:34.290 14:09:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:34.290 14:09:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:34.290 14:09:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:34.290 14:09:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:34.290 14:09:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:34.290 { 00:07:34.290 "params": { 00:07:34.290 "name": "Nvme$subsystem", 00:07:34.290 "trtype": "$TEST_TRANSPORT", 00:07:34.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:34.290 "adrfam": "ipv4", 00:07:34.290 "trsvcid": "$NVMF_PORT", 00:07:34.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:34.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:34.290 "hdgst": ${hdgst:-false}, 00:07:34.290 "ddgst": ${ddgst:-false} 00:07:34.290 }, 00:07:34.290 "method": "bdev_nvme_attach_controller" 00:07:34.290 } 00:07:34.290 EOF 00:07:34.290 )") 00:07:34.291 14:09:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:34.291 14:09:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:34.291 14:09:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:34.291 14:09:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:34.291 "params": { 00:07:34.291 "name": "Nvme0", 00:07:34.291 "trtype": "tcp", 00:07:34.291 "traddr": "10.0.0.2", 00:07:34.291 "adrfam": "ipv4", 00:07:34.291 "trsvcid": "4420", 00:07:34.291 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:34.291 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:34.291 "hdgst": false, 00:07:34.291 "ddgst": false 00:07:34.291 }, 00:07:34.291 "method": "bdev_nvme_attach_controller" 00:07:34.291 }' 00:07:34.291 [2024-12-10 14:09:34.992450] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:07:34.291 [2024-12-10 14:09:34.992498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477589 ] 00:07:34.552 [2024-12-10 14:09:35.071721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.552 [2024-12-10 14:09:35.109817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.812 Running I/O for 1 seconds... 00:07:35.750 2048.00 IOPS, 128.00 MiB/s 00:07:35.750 Latency(us) 00:07:35.750 [2024-12-10T13:09:36.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.750 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:35.750 Verification LBA range: start 0x0 length 0x400 00:07:35.750 Nvme0n1 : 1.02 2064.99 129.06 0.00 0.00 30510.57 7208.96 27337.87 00:07:35.750 [2024-12-10T13:09:36.490Z] =================================================================================================================== 00:07:35.750 [2024-12-10T13:09:36.490Z] Total : 2064.99 129.06 0.00 0.00 30510.57 7208.96 27337.87 00:07:36.009 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:36.009 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:36.010 rmmod nvme_tcp 00:07:36.010 rmmod nvme_fabrics 00:07:36.010 rmmod 
nvme_keyring 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1477046 ']' 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1477046 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1477046 ']' 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1477046 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1477046 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1477046' 00:07:36.010 killing process with pid 1477046 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1477046 00:07:36.010 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1477046 00:07:36.269 [2024-12-10 14:09:36.782381] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:36.269 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:36.269 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:36.269 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:36.269 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:36.269 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:36.269 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:36.269 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:36.269 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:36.269 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:36.269 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.269 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:36.269 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.174 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:38.174 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:38.174 00:07:38.174 real 0m13.798s 00:07:38.174 user 0m22.219s 00:07:38.174 sys 0m6.261s 00:07:38.174 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.174 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.174 ************************************ 00:07:38.174 END TEST nvmf_host_management 00:07:38.174 ************************************ 00:07:38.434 14:09:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:38.434 14:09:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:38.434 14:09:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.434 14:09:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:38.434 ************************************ 00:07:38.434 START TEST nvmf_lvol 00:07:38.434 ************************************ 00:07:38.434 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:38.434 * Looking for test storage... 00:07:38.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:38.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.434 --rc genhtml_branch_coverage=1 00:07:38.434 --rc genhtml_function_coverage=1 00:07:38.434 --rc genhtml_legend=1 00:07:38.434 --rc geninfo_all_blocks=1 00:07:38.434 --rc geninfo_unexecuted_blocks=1 00:07:38.434 00:07:38.434 ' 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:38.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.434 --rc genhtml_branch_coverage=1 00:07:38.434 --rc genhtml_function_coverage=1 00:07:38.434 --rc genhtml_legend=1 00:07:38.434 --rc geninfo_all_blocks=1 00:07:38.434 --rc geninfo_unexecuted_blocks=1 00:07:38.434 00:07:38.434 ' 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:38.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.434 --rc genhtml_branch_coverage=1 00:07:38.434 --rc genhtml_function_coverage=1 00:07:38.434 --rc genhtml_legend=1 00:07:38.434 --rc geninfo_all_blocks=1 00:07:38.434 --rc geninfo_unexecuted_blocks=1 00:07:38.434 00:07:38.434 ' 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:38.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.434 --rc genhtml_branch_coverage=1 00:07:38.434 --rc genhtml_function_coverage=1 00:07:38.434 --rc genhtml_legend=1 00:07:38.434 --rc geninfo_all_blocks=1 00:07:38.434 --rc geninfo_unexecuted_blocks=1 00:07:38.434 00:07:38.434 ' 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:38.434 14:09:39 
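The lt/cmp_versions probe traced around here decides whether the installed lcov predates 2.x and therefore needs the legacy --rc lcov_* option spelling. A condensed sketch of that split-and-compare idiom (a simplification, not a verbatim copy of scripts/common.sh) is:

    # Sketch: succeed when version $1 is strictly older than $2.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
        for ((v = 0; v < max; v++)); do
            # Missing components compare as 0, so "2" behaves like "2.0".
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        done
        return 1    # equal is not "less than"
    }
    lt 1.15 2 && echo 'lcov < 2: keep the legacy --rc lcov_* options'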
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.434 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:38.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.435 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.694 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:38.694 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:38.694 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:38.694 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:45.263 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.263 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:45.264 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:45.264 14:09:45 
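The scan traced here amounts to: match Intel E810 ports by PCI ID (0x8086:0x159b, alongside the x722 and Mellanox tables for other NIC families), then resolve each matched port to its kernel net device via sysfs, as traced just below. An equivalent one-off check, using lspci in place of the harness's cached bus scan (an assumption for brevity, not what the script actually runs), would be:

    # Sketch: list E810 ports and the net devices behind them.
    for pci in $(lspci -Dnn | awk '/\[8086:159b\]/ {print $1}'); do
        echo "Found $pci -> $(ls /sys/bus/pci/devices/"$pci"/net/)"
    done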
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:45.264 Found net devices under 0000:af:00.0: cvl_0_0 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:45.264 Found net devices under 0000:af:00.1: cvl_0_1 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:45.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:45.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:07:45.264 00:07:45.264 --- 10.0.0.2 ping statistics --- 00:07:45.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.264 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:45.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:45.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:07:45.264 00:07:45.264 --- 10.0.0.1 ping statistics --- 00:07:45.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.264 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:45.264 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:45.524 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:45.524 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:45.524 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:45.524 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:45.524 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1481805 00:07:45.524 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:45.524 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1481805 00:07:45.524 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1481805 ']' 00:07:45.524 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.524 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.524 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.524 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.524 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:45.524 [2024-12-10 14:09:46.063240] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
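Condensed, the namespace plumbing traced above comes down to: move one E810 port into a private network namespace to act as the target, keep the other in the root namespace as the initiator, address both ends on 10.0.0.0/24, and open TCP/4420 toward the target. The commands below are taken from the trace (cvl_0_0/cvl_0_1 are the harness's renamed ports):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # verified in the log above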
00:07:45.524 [2024-12-10 14:09:46.063282] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.524 [2024-12-10 14:09:46.145214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:45.524 [2024-12-10 14:09:46.185286] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.524 [2024-12-10 14:09:46.185320] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:45.524 [2024-12-10 14:09:46.185327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.524 [2024-12-10 14:09:46.185333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.524 [2024-12-10 14:09:46.185339] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:45.524 [2024-12-10 14:09:46.186675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.524 [2024-12-10 14:09:46.186785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.524 [2024-12-10 14:09:46.186786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.782 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.782 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:45.782 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:45.782 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:45.782 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:45.782 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.782 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:45.782 [2024-12-10 14:09:46.488608] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.782 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:46.041 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:46.041 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:46.299 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:46.299 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:46.557 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:46.815 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b3ac1ca6-4f90-4431-a1c3-36edf8175c91 00:07:46.815 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b3ac1ca6-4f90-4431-a1c3-36edf8175c91 lvol 20 00:07:47.073 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=61452eaa-0c0e-4876-9015-01e5ecc57475 00:07:47.073 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:47.073 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 61452eaa-0c0e-4876-9015-01e5ecc57475 00:07:47.331 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:47.589 [2024-12-10 14:09:48.139280] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:47.589 14:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:47.847 14:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1482109 00:07:47.847 14:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:47.847 14:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:48.783 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 61452eaa-0c0e-4876-9015-01e5ecc57475 MY_SNAPSHOT 00:07:49.041 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1d5a7695-fb6c-4d25-a94a-ff33d3e32812 00:07:49.041 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 61452eaa-0c0e-4876-9015-01e5ecc57475 30 00:07:49.300 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1d5a7695-fb6c-4d25-a94a-ff33d3e32812 MY_CLONE 00:07:49.558 14:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=23fd5f54-31a2-4bf2-bef0-4eabb2db65d2 00:07:49.558 14:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 23fd5f54-31a2-4bf2-bef0-4eabb2db65d2 00:07:50.124 14:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1482109 00:07:58.237 Initializing NVMe Controllers 00:07:58.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:58.237 Controller IO queue size 128, less than required. 00:07:58.237 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
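The stack whose workload output begins above was built up, then mutated mid-run, with the RPC sequence below (condensed from the trace; UUIDs abbreviated, lvol sizes in MiB):

    # Two 64 MiB malloc bdevs -> RAID0 -> lvstore -> 20 MiB lvol,
    # then snapshot, grow to 30 MiB, clone the snapshot, inflate the clone,
    # all while spdk_nvme_perf drives I/O over NVMe/TCP.
    rpc.py bdev_malloc_create 64 512                        # Malloc0
    rpc.py bdev_malloc_create 64 512                        # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs               # -> b3ac1ca6-...
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20           # -> 61452eaa-...
    rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT       # -> 1d5a7695-...
    rpc.py bdev_lvol_resize <lvol-uuid> 30
    rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE         # -> 23fd5f54-...
    rpc.py bdev_lvol_inflate <clone-uuid>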
00:07:58.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:58.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:58.237 Initialization complete. Launching workers. 00:07:58.237 ======================================================== 00:07:58.237 Latency(us) 00:07:58.237 Device Information : IOPS MiB/s Average min max 00:07:58.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12370.00 48.32 10347.20 1534.67 53108.98 00:07:58.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12226.80 47.76 10470.32 3225.40 59723.96 00:07:58.237 ======================================================== 00:07:58.237 Total : 24596.80 96.08 10408.41 1534.67 59723.96 00:07:58.237 00:07:58.237 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:58.495 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 61452eaa-0c0e-4876-9015-01e5ecc57475 00:07:58.754 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b3ac1ca6-4f90-4431-a1c3-36edf8175c91 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:59.013 rmmod nvme_tcp 00:07:59.013 rmmod nvme_fabrics 00:07:59.013 rmmod nvme_keyring 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1481805 ']' 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1481805 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1481805 ']' 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1481805 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1481805 00:07:59.013 14:09:59 
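Exporting that lvol over NVMe/TCP, and the mirror-image teardown, again condensed from the RPCs traced above:

    # Export, before the workload ...
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # (spdk_nvme_perf workload runs here)
    # ... then teardown in reverse dependency order.
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_lvol_delete <lvol-uuid>
    rpc.py bdev_lvol_delete_lvstore -u <lvs-uuid>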
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1481805' 00:07:59.013 killing process with pid 1481805 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1481805 00:07:59.013 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1481805 00:07:59.272 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:59.272 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:59.272 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:59.272 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:59.272 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:59.272 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:59.272 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:59.272 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:59.272 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:59.272 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.272 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.272 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.806 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:01.806 00:08:01.806 real 0m22.996s 00:08:01.806 user 1m4.064s 00:08:01.806 sys 0m8.275s 00:08:01.806 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.806 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:01.806 ************************************ 00:08:01.806 END TEST nvmf_lvol 00:08:01.806 ************************************ 00:08:01.806 14:10:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:01.806 14:10:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:01.806 14:10:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.806 14:10:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:01.806 ************************************ 00:08:01.806 START TEST nvmf_lvs_grow 00:08:01.806 ************************************ 00:08:01.806 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:01.806 * Looking for test storage... 
00:08:01.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.806 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:01.806 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:01.806 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:01.806 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:01.806 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.806 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.806 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.806 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.806 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.806 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.806 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.806 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.806 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:01.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.807 --rc genhtml_branch_coverage=1 00:08:01.807 --rc genhtml_function_coverage=1 00:08:01.807 --rc genhtml_legend=1 00:08:01.807 --rc geninfo_all_blocks=1 00:08:01.807 --rc geninfo_unexecuted_blocks=1 00:08:01.807 00:08:01.807 ' 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:01.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.807 --rc genhtml_branch_coverage=1 00:08:01.807 --rc genhtml_function_coverage=1 00:08:01.807 --rc genhtml_legend=1 00:08:01.807 --rc geninfo_all_blocks=1 00:08:01.807 --rc geninfo_unexecuted_blocks=1 00:08:01.807 00:08:01.807 ' 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:01.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.807 --rc genhtml_branch_coverage=1 00:08:01.807 --rc genhtml_function_coverage=1 00:08:01.807 --rc genhtml_legend=1 00:08:01.807 --rc geninfo_all_blocks=1 00:08:01.807 --rc geninfo_unexecuted_blocks=1 00:08:01.807 00:08:01.807 ' 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:01.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.807 --rc genhtml_branch_coverage=1 00:08:01.807 --rc genhtml_function_coverage=1 00:08:01.807 --rc genhtml_legend=1 00:08:01.807 --rc geninfo_all_blocks=1 00:08:01.807 --rc geninfo_unexecuted_blocks=1 00:08:01.807 00:08:01.807 ' 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:01.807 14:10:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:01.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:01.807 14:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:08.375 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:08.375 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.375 14:10:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:08.375 Found net devices under 0000:af:00.0: cvl_0_0 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:08.375 Found net devices under 0000:af:00.1: cvl_0_1 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.375 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:08.376 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:08.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:08:08.376 00:08:08.376 --- 10.0.0.2 ping statistics --- 00:08:08.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.376 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:08:08.376 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:08.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:08:08.376 00:08:08.376 --- 10.0.0.1 ping statistics --- 00:08:08.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.376 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:08:08.376 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.376 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:08.376 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:08.376 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.376 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:08.376 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:08.376 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.376 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:08.376 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:08.376 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:08.376 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:08.376 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:08.376 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.376 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1487949 00:08:08.376 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1487949 00:08:08.376 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:08.376 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1487949 ']' 00:08:08.376 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.376 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.376 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.376 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.376 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.376 [2024-12-10 14:10:09.056828] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
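The trace above is nvmf_tcp_init from test/nvmf/common.sh: the two e810 ports found earlier (cvl_0_0, cvl_0_1) are split across network namespaces so target and initiator can exchange NVMe/TCP traffic over real wire on a single host. A minimal standalone sketch of the same topology, assembled from the commands in the trace; the interface and namespace names are this rig's values, so substitute your own ports and run as root:

    #!/usr/bin/env bash
    # Sketch of the namespace topology nvmf_tcp_init builds above.
    # Assumption: TGT_IF/INI_IF are two otherwise unused ports on one host;
    # the names below are the cvl_0_* aliases this rig assigned.
    set -e
    TGT_IF=cvl_0_0              # moved into its own netns, serves NVMe/TCP on 10.0.0.2
    INI_IF=cvl_0_1              # stays in the root netns as the initiator, 10.0.0.1
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    # accept NVMe/TCP (port 4420) on the initiator-side interface
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                        # root ns -> target ns, as checked above
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target ns -> root ns
    modprobe nvme-tcp                         # initiator-side kernel driver

The target application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1, as in the trace), so every listener it opens sits behind 10.0.0.2.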
00:08:08.376 [2024-12-10 14:10:09.056872] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.635 [2024-12-10 14:10:09.139702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.635 [2024-12-10 14:10:09.177433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.635 [2024-12-10 14:10:09.177469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.635 [2024-12-10 14:10:09.177480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.635 [2024-12-10 14:10:09.177485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.635 [2024-12-10 14:10:09.177490] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.635 [2024-12-10 14:10:09.178039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.635 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.635 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:08.635 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:08.635 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:08.635 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.635 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.635 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:08.893 [2024-12-10 14:10:09.490066] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.893 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:08.893 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:08.893 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.893 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.893 ************************************ 00:08:08.893 START TEST lvs_grow_clean 00:08:08.893 ************************************ 00:08:08.893 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:08.893 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:08.893 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:08.893 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:08.893 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:08.893 14:10:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:08.893 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:08.893 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:08.893 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:08.893 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:09.152 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:09.152 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:09.410 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=45729e41-3db6-417d-bd47-78d9348fe810 00:08:09.410 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 45729e41-3db6-417d-bd47-78d9348fe810 00:08:09.410 14:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:09.669 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:09.669 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:09.669 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 45729e41-3db6-417d-bd47-78d9348fe810 lvol 150 00:08:09.669 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6b20f668-1809-46c4-bf6e-481f51af0998 00:08:09.669 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:09.669 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:09.927 [2024-12-10 14:10:10.529192] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:09.927 [2024-12-10 14:10:10.529255] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:09.927 true 00:08:09.927 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:09.927 14:10:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 45729e41-3db6-417d-bd47-78d9348fe810 00:08:10.185 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:10.186 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:10.186 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6b20f668-1809-46c4-bf6e-481f51af0998 00:08:10.444 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:10.703 [2024-12-10 14:10:11.251374] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.703 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:10.961 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1488445 00:08:10.961 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:10.961 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:10.962 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1488445 /var/tmp/bdevperf.sock 00:08:10.962 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1488445 ']' 00:08:10.962 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:10.962 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.962 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:10.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:10.962 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.962 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:10.962 [2024-12-10 14:10:11.491952] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
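Condensed, the clean-test provisioning traced above is a plain RPC sequence against that namespaced nvmf_tgt. A sketch with the long Jenkins paths shortened for readability: rpc.py stands in for the spdk/scripts/rpc.py invocations in the log, and AIO_FILE for the test/nvmf/target/aio_bdev backing file.

    # Provisioning sequence replayed from the trace above (paths shortened).
    AIO_FILE=aio_bdev_file                       # stand-in for .../test/nvmf/target/aio_bdev
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rm -f "$AIO_FILE" && truncate -s 200M "$AIO_FILE"
    rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB volume
    truncate -s 400M "$AIO_FILE"                 # grow the file now; the lvstore grows later
    rpc.py bdev_aio_rescan aio_bdev              # bdev resized: 51200 -> 102400 blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The asserted 49 follows from the sizes chosen: a 200 MiB file with 4 MiB clusters gives 50 clusters, with the remainder going to lvstore metadata. Note the test deliberately grows only the backing file here; total_data_clusters stays 49 until bdev_lvol_grow_lvstore is issued later. bdevperf then attaches over TCP (bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0, as in the trace below) and drives randwrite I/O against Nvme0n1.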
00:08:10.962 [2024-12-10 14:10:11.491998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488445 ] 00:08:10.962 [2024-12-10 14:10:11.573333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.962 [2024-12-10 14:10:11.613540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.897 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.897 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:11.897 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:11.897 Nvme0n1 00:08:11.897 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:12.156 [ 00:08:12.156 { 00:08:12.156 "name": "Nvme0n1", 00:08:12.156 "aliases": [ 00:08:12.156 "6b20f668-1809-46c4-bf6e-481f51af0998" 00:08:12.156 ], 00:08:12.156 "product_name": "NVMe disk", 00:08:12.156 "block_size": 4096, 00:08:12.156 "num_blocks": 38912, 00:08:12.156 "uuid": "6b20f668-1809-46c4-bf6e-481f51af0998", 00:08:12.156 "numa_id": 1, 00:08:12.156 "assigned_rate_limits": { 00:08:12.156 "rw_ios_per_sec": 0, 00:08:12.156 "rw_mbytes_per_sec": 0, 00:08:12.156 "r_mbytes_per_sec": 0, 00:08:12.156 "w_mbytes_per_sec": 0 00:08:12.156 }, 00:08:12.156 "claimed": false, 00:08:12.156 "zoned": false, 00:08:12.156 "supported_io_types": { 00:08:12.156 "read": true, 00:08:12.156 "write": true, 00:08:12.156 "unmap": true, 00:08:12.156 "flush": true, 00:08:12.156 "reset": true, 00:08:12.156 "nvme_admin": true, 00:08:12.156 "nvme_io": true, 00:08:12.156 "nvme_io_md": false, 00:08:12.156 "write_zeroes": true, 00:08:12.156 "zcopy": false, 00:08:12.156 "get_zone_info": false, 00:08:12.156 "zone_management": false, 00:08:12.156 "zone_append": false, 00:08:12.156 "compare": true, 00:08:12.156 "compare_and_write": true, 00:08:12.156 "abort": true, 00:08:12.156 "seek_hole": false, 00:08:12.156 "seek_data": false, 00:08:12.156 "copy": true, 00:08:12.156 "nvme_iov_md": false 00:08:12.156 }, 00:08:12.156 "memory_domains": [ 00:08:12.156 { 00:08:12.156 "dma_device_id": "system", 00:08:12.156 "dma_device_type": 1 00:08:12.156 } 00:08:12.156 ], 00:08:12.156 "driver_specific": { 00:08:12.156 "nvme": [ 00:08:12.156 { 00:08:12.156 "trid": { 00:08:12.156 "trtype": "TCP", 00:08:12.156 "adrfam": "IPv4", 00:08:12.156 "traddr": "10.0.0.2", 00:08:12.156 "trsvcid": "4420", 00:08:12.156 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:12.156 }, 00:08:12.156 "ctrlr_data": { 00:08:12.156 "cntlid": 1, 00:08:12.156 "vendor_id": "0x8086", 00:08:12.156 "model_number": "SPDK bdev Controller", 00:08:12.156 "serial_number": "SPDK0", 00:08:12.156 "firmware_revision": "25.01", 00:08:12.156 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:12.156 "oacs": { 00:08:12.156 "security": 0, 00:08:12.156 "format": 0, 00:08:12.156 "firmware": 0, 00:08:12.156 "ns_manage": 0 00:08:12.156 }, 00:08:12.156 "multi_ctrlr": true, 00:08:12.156 
"ana_reporting": false 00:08:12.156 }, 00:08:12.156 "vs": { 00:08:12.156 "nvme_version": "1.3" 00:08:12.156 }, 00:08:12.156 "ns_data": { 00:08:12.156 "id": 1, 00:08:12.156 "can_share": true 00:08:12.156 } 00:08:12.156 } 00:08:12.156 ], 00:08:12.156 "mp_policy": "active_passive" 00:08:12.156 } 00:08:12.156 } 00:08:12.156 ] 00:08:12.156 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1488676 00:08:12.156 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:12.156 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:12.156 Running I/O for 10 seconds... 00:08:13.531 Latency(us) 00:08:13.531 [2024-12-10T13:10:14.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.531 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.531 Nvme0n1 : 1.00 23584.00 92.12 0.00 0.00 0.00 0.00 0.00 00:08:13.531 [2024-12-10T13:10:14.271Z] =================================================================================================================== 00:08:13.531 [2024-12-10T13:10:14.271Z] Total : 23584.00 92.12 0.00 0.00 0.00 0.00 0.00 00:08:13.531 00:08:14.098 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 45729e41-3db6-417d-bd47-78d9348fe810 00:08:14.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.357 Nvme0n1 : 2.00 23671.00 92.46 0.00 0.00 0.00 0.00 0.00 00:08:14.357 [2024-12-10T13:10:15.097Z] =================================================================================================================== 00:08:14.357 [2024-12-10T13:10:15.097Z] Total : 23671.00 92.46 0.00 0.00 0.00 0.00 0.00 00:08:14.357 00:08:14.357 true 00:08:14.357 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 45729e41-3db6-417d-bd47-78d9348fe810 00:08:14.357 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:14.614 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:14.614 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:14.614 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1488676 00:08:15.180 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.180 Nvme0n1 : 3.00 23703.67 92.59 0.00 0.00 0.00 0.00 0.00 00:08:15.180 [2024-12-10T13:10:15.920Z] =================================================================================================================== 00:08:15.180 [2024-12-10T13:10:15.920Z] Total : 23703.67 92.59 0.00 0.00 0.00 0.00 0.00 00:08:15.180 00:08:16.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.556 Nvme0n1 : 4.00 23780.75 92.89 0.00 0.00 0.00 0.00 0.00 00:08:16.556 [2024-12-10T13:10:17.296Z] 
=================================================================================================================== 00:08:16.556 [2024-12-10T13:10:17.296Z] Total : 23780.75 92.89 0.00 0.00 0.00 0.00 0.00 00:08:16.556 00:08:17.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.490 Nvme0n1 : 5.00 23820.40 93.05 0.00 0.00 0.00 0.00 0.00 00:08:17.490 [2024-12-10T13:10:18.230Z] =================================================================================================================== 00:08:17.490 [2024-12-10T13:10:18.230Z] Total : 23820.40 93.05 0.00 0.00 0.00 0.00 0.00 00:08:17.490 00:08:18.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.426 Nvme0n1 : 6.00 23873.50 93.26 0.00 0.00 0.00 0.00 0.00 00:08:18.426 [2024-12-10T13:10:19.166Z] =================================================================================================================== 00:08:18.426 [2024-12-10T13:10:19.166Z] Total : 23873.50 93.26 0.00 0.00 0.00 0.00 0.00 00:08:18.426 00:08:19.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.360 Nvme0n1 : 7.00 23889.43 93.32 0.00 0.00 0.00 0.00 0.00 00:08:19.360 [2024-12-10T13:10:20.100Z] =================================================================================================================== 00:08:19.360 [2024-12-10T13:10:20.100Z] Total : 23889.43 93.32 0.00 0.00 0.00 0.00 0.00 00:08:19.361 00:08:20.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.296 Nvme0n1 : 8.00 23904.25 93.38 0.00 0.00 0.00 0.00 0.00 00:08:20.296 [2024-12-10T13:10:21.036Z] =================================================================================================================== 00:08:20.296 [2024-12-10T13:10:21.036Z] Total : 23904.25 93.38 0.00 0.00 0.00 0.00 0.00 00:08:20.296 00:08:21.231 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.231 Nvme0n1 : 9.00 23917.78 93.43 0.00 0.00 0.00 0.00 0.00 00:08:21.231 [2024-12-10T13:10:21.971Z] =================================================================================================================== 00:08:21.231 [2024-12-10T13:10:21.971Z] Total : 23917.78 93.43 0.00 0.00 0.00 0.00 0.00 00:08:21.231 00:08:22.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.167 Nvme0n1 : 10.00 23937.40 93.51 0.00 0.00 0.00 0.00 0.00 00:08:22.167 [2024-12-10T13:10:22.907Z] =================================================================================================================== 00:08:22.167 [2024-12-10T13:10:22.907Z] Total : 23937.40 93.51 0.00 0.00 0.00 0.00 0.00 00:08:22.167 00:08:22.167 00:08:22.167 Latency(us) 00:08:22.167 [2024-12-10T13:10:22.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.167 Nvme0n1 : 10.00 23936.02 93.50 0.00 0.00 5344.55 3151.97 10423.34 00:08:22.167 [2024-12-10T13:10:22.907Z] =================================================================================================================== 00:08:22.167 [2024-12-10T13:10:22.907Z] Total : 23936.02 93.50 0.00 0.00 5344.55 3151.97 10423.34 00:08:22.167 { 00:08:22.167 "results": [ 00:08:22.167 { 00:08:22.167 "job": "Nvme0n1", 00:08:22.167 "core_mask": "0x2", 00:08:22.167 "workload": "randwrite", 00:08:22.167 "status": "finished", 00:08:22.167 "queue_depth": 128, 00:08:22.167 "io_size": 4096, 00:08:22.167 
"runtime": 10.003249, 00:08:22.167 "iops": 23936.023186066846, 00:08:22.167 "mibps": 93.50009057057362, 00:08:22.167 "io_failed": 0, 00:08:22.167 "io_timeout": 0, 00:08:22.167 "avg_latency_us": 5344.554911099364, 00:08:22.167 "min_latency_us": 3151.9695238095237, 00:08:22.167 "max_latency_us": 10423.344761904762 00:08:22.167 } 00:08:22.167 ], 00:08:22.167 "core_count": 1 00:08:22.167 } 00:08:22.425 14:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1488445 00:08:22.425 14:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1488445 ']' 00:08:22.425 14:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1488445 00:08:22.425 14:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:22.425 14:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:22.425 14:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1488445 00:08:22.425 14:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:22.425 14:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:22.425 14:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1488445' 00:08:22.425 killing process with pid 1488445 00:08:22.425 14:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1488445 00:08:22.425 Received shutdown signal, test time was about 10.000000 seconds 00:08:22.425 00:08:22.425 Latency(us) 00:08:22.425 [2024-12-10T13:10:23.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.425 [2024-12-10T13:10:23.165Z] =================================================================================================================== 00:08:22.425 [2024-12-10T13:10:23.165Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:22.425 14:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1488445 00:08:22.426 14:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:22.684 14:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:22.942 14:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 45729e41-3db6-417d-bd47-78d9348fe810 00:08:22.942 14:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:23.201 14:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:23.201 14:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:23.201 14:10:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:23.201 [2024-12-10 14:10:23.846845] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:23.201 14:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 45729e41-3db6-417d-bd47-78d9348fe810 00:08:23.201 14:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:23.201 14:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 45729e41-3db6-417d-bd47-78d9348fe810 00:08:23.201 14:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:23.201 14:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.201 14:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:23.201 14:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.201 14:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:23.201 14:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.201 14:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:23.201 14:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:23.201 14:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 45729e41-3db6-417d-bd47-78d9348fe810 00:08:23.460 request: 00:08:23.460 { 00:08:23.460 "uuid": "45729e41-3db6-417d-bd47-78d9348fe810", 00:08:23.460 "method": "bdev_lvol_get_lvstores", 00:08:23.460 "req_id": 1 00:08:23.460 } 00:08:23.460 Got JSON-RPC error response 00:08:23.460 response: 00:08:23.460 { 00:08:23.460 "code": -19, 00:08:23.460 "message": "No such device" 00:08:23.460 } 00:08:23.460 14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:23.460 14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:23.460 14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:23.460 14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:23.460 14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:23.719 aio_bdev 00:08:23.719 14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6b20f668-1809-46c4-bf6e-481f51af0998 00:08:23.719 14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=6b20f668-1809-46c4-bf6e-481f51af0998 00:08:23.719 14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.719 14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:23.719 14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.719 14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.719 14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:23.719 14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6b20f668-1809-46c4-bf6e-481f51af0998 -t 2000 00:08:23.977 [ 00:08:23.977 { 00:08:23.977 "name": "6b20f668-1809-46c4-bf6e-481f51af0998", 00:08:23.977 "aliases": [ 00:08:23.977 "lvs/lvol" 00:08:23.977 ], 00:08:23.977 "product_name": "Logical Volume", 00:08:23.977 "block_size": 4096, 00:08:23.977 "num_blocks": 38912, 00:08:23.977 "uuid": "6b20f668-1809-46c4-bf6e-481f51af0998", 00:08:23.977 "assigned_rate_limits": { 00:08:23.977 "rw_ios_per_sec": 0, 00:08:23.977 "rw_mbytes_per_sec": 0, 00:08:23.977 "r_mbytes_per_sec": 0, 00:08:23.977 "w_mbytes_per_sec": 0 00:08:23.977 }, 00:08:23.977 "claimed": false, 00:08:23.977 "zoned": false, 00:08:23.977 "supported_io_types": { 00:08:23.977 "read": true, 00:08:23.977 "write": true, 00:08:23.977 "unmap": true, 00:08:23.977 "flush": false, 00:08:23.977 "reset": true, 00:08:23.977 "nvme_admin": false, 00:08:23.977 "nvme_io": false, 00:08:23.977 "nvme_io_md": false, 00:08:23.977 "write_zeroes": true, 00:08:23.977 "zcopy": false, 00:08:23.977 "get_zone_info": false, 00:08:23.977 "zone_management": false, 00:08:23.977 "zone_append": false, 00:08:23.977 "compare": false, 00:08:23.977 "compare_and_write": false, 00:08:23.977 "abort": false, 00:08:23.977 "seek_hole": true, 00:08:23.977 "seek_data": true, 00:08:23.977 "copy": false, 00:08:23.977 "nvme_iov_md": false 00:08:23.977 }, 00:08:23.977 "driver_specific": { 00:08:23.977 "lvol": { 00:08:23.977 "lvol_store_uuid": "45729e41-3db6-417d-bd47-78d9348fe810", 00:08:23.977 "base_bdev": "aio_bdev", 00:08:23.977 "thin_provision": false, 00:08:23.977 "num_allocated_clusters": 38, 00:08:23.977 "snapshot": false, 00:08:23.978 "clone": false, 00:08:23.978 "esnap_clone": false 00:08:23.978 } 00:08:23.978 } 00:08:23.978 } 00:08:23.978 ] 00:08:23.978 14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:23.978 14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 45729e41-3db6-417d-bd47-78d9348fe810 00:08:23.978 
14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:24.235 14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:24.235 14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 45729e41-3db6-417d-bd47-78d9348fe810 00:08:24.235 14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:24.496 14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:24.496 14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6b20f668-1809-46c4-bf6e-481f51af0998 00:08:24.496 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 45729e41-3db6-417d-bd47-78d9348fe810 00:08:24.755 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:25.014 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.014 00:08:25.014 real 0m16.067s 00:08:25.014 user 0m15.836s 00:08:25.014 sys 0m1.450s 00:08:25.014 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.014 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:25.014 ************************************ 00:08:25.014 END TEST lvs_grow_clean 00:08:25.014 ************************************ 00:08:25.014 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:25.014 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:25.014 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.014 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:25.014 ************************************ 00:08:25.014 START TEST lvs_grow_dirty 00:08:25.014 ************************************ 00:08:25.014 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:25.014 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:25.014 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:25.014 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:25.014 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:25.014 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:25.014 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:25.014 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.014 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.014 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:25.273 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:25.273 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:25.531 14:10:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c970b322-8c70-4590-89aa-7fb1bfed5450 00:08:25.531 14:10:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c970b322-8c70-4590-89aa-7fb1bfed5450 00:08:25.531 14:10:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:25.790 14:10:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:25.790 14:10:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:25.790 14:10:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c970b322-8c70-4590-89aa-7fb1bfed5450 lvol 150 00:08:25.790 14:10:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a5310378-a60a-4808-a6f1-166e96cc421d 00:08:25.790 14:10:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.790 14:10:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:26.049 [2024-12-10 14:10:26.637094] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:26.049 [2024-12-10 14:10:26.637148] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:26.049 true 00:08:26.049 14:10:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c970b322-8c70-4590-89aa-7fb1bfed5450 00:08:26.049 14:10:26 
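Condensed, the grow setup being traced here is: create a 200 MiB AIO file and bdev, build an lvstore with 4 MiB clusters (49 data clusters once metadata is accounted for), carve a 150 MiB lvol, then grow the file to 400 MiB and rescan. A sketch with the same RPCs, the lvstore UUID left as a placeholder for the value returned above:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
truncate -s 200M "$AIO"
"$RPC" bdev_aio_create "$AIO" aio_bdev 4096
"$RPC" bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
"$RPC" bdev_lvol_create -u "$LVS_UUID" lvol 150   # LVS_UUID: UUID reported by the create call
truncate -s 400M "$AIO"            # grow the backing file...
"$RPC" bdev_aio_rescan aio_bdev    # ...and propagate the new size to the bdev
# total_data_clusters still reads 49 at this point; it only doubles once
# bdev_lvol_grow_lvstore is issued later in the test.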
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:26.307 14:10:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:26.307 14:10:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:26.307 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a5310378-a60a-4808-a6f1-166e96cc421d 00:08:26.566 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:26.824 [2024-12-10 14:10:27.355239] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.824 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:26.824 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1491228 00:08:26.824 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:26.824 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:26.824 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1491228 /var/tmp/bdevperf.sock 00:08:26.824 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1491228 ']' 00:08:26.825 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:26.825 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.825 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:26.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:26.825 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.825 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:27.083 [2024-12-10 14:10:27.586635] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
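bdevperf is launched above with -z, so it idles until driven over its own RPC socket; the harness then attaches the freshly exported subsystem as Nvme0 and kicks off the run with perform_tests. A sketch of that handshake, using the same flags as the trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK"/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
"$SPDK"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests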
00:08:27.083 [2024-12-10 14:10:27.586680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491228 ] 00:08:27.083 [2024-12-10 14:10:27.663986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.083 [2024-12-10 14:10:27.702715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.083 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.083 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:27.083 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:27.342 Nvme0n1 00:08:27.342 14:10:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:27.601 [ 00:08:27.601 { 00:08:27.601 "name": "Nvme0n1", 00:08:27.601 "aliases": [ 00:08:27.601 "a5310378-a60a-4808-a6f1-166e96cc421d" 00:08:27.601 ], 00:08:27.601 "product_name": "NVMe disk", 00:08:27.601 "block_size": 4096, 00:08:27.601 "num_blocks": 38912, 00:08:27.601 "uuid": "a5310378-a60a-4808-a6f1-166e96cc421d", 00:08:27.601 "numa_id": 1, 00:08:27.601 "assigned_rate_limits": { 00:08:27.601 "rw_ios_per_sec": 0, 00:08:27.601 "rw_mbytes_per_sec": 0, 00:08:27.601 "r_mbytes_per_sec": 0, 00:08:27.601 "w_mbytes_per_sec": 0 00:08:27.601 }, 00:08:27.601 "claimed": false, 00:08:27.601 "zoned": false, 00:08:27.601 "supported_io_types": { 00:08:27.601 "read": true, 00:08:27.601 "write": true, 00:08:27.601 "unmap": true, 00:08:27.601 "flush": true, 00:08:27.601 "reset": true, 00:08:27.601 "nvme_admin": true, 00:08:27.601 "nvme_io": true, 00:08:27.601 "nvme_io_md": false, 00:08:27.601 "write_zeroes": true, 00:08:27.601 "zcopy": false, 00:08:27.601 "get_zone_info": false, 00:08:27.601 "zone_management": false, 00:08:27.601 "zone_append": false, 00:08:27.601 "compare": true, 00:08:27.601 "compare_and_write": true, 00:08:27.601 "abort": true, 00:08:27.601 "seek_hole": false, 00:08:27.601 "seek_data": false, 00:08:27.601 "copy": true, 00:08:27.601 "nvme_iov_md": false 00:08:27.601 }, 00:08:27.601 "memory_domains": [ 00:08:27.601 { 00:08:27.601 "dma_device_id": "system", 00:08:27.601 "dma_device_type": 1 00:08:27.601 } 00:08:27.601 ], 00:08:27.601 "driver_specific": { 00:08:27.601 "nvme": [ 00:08:27.601 { 00:08:27.601 "trid": { 00:08:27.601 "trtype": "TCP", 00:08:27.601 "adrfam": "IPv4", 00:08:27.601 "traddr": "10.0.0.2", 00:08:27.601 "trsvcid": "4420", 00:08:27.601 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:27.601 }, 00:08:27.601 "ctrlr_data": { 00:08:27.601 "cntlid": 1, 00:08:27.601 "vendor_id": "0x8086", 00:08:27.601 "model_number": "SPDK bdev Controller", 00:08:27.601 "serial_number": "SPDK0", 00:08:27.601 "firmware_revision": "25.01", 00:08:27.601 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:27.601 "oacs": { 00:08:27.601 "security": 0, 00:08:27.601 "format": 0, 00:08:27.601 "firmware": 0, 00:08:27.601 "ns_manage": 0 00:08:27.601 }, 00:08:27.601 "multi_ctrlr": true, 00:08:27.601 
"ana_reporting": false 00:08:27.601 }, 00:08:27.601 "vs": { 00:08:27.601 "nvme_version": "1.3" 00:08:27.601 }, 00:08:27.601 "ns_data": { 00:08:27.601 "id": 1, 00:08:27.601 "can_share": true 00:08:27.601 } 00:08:27.601 } 00:08:27.601 ], 00:08:27.601 "mp_policy": "active_passive" 00:08:27.601 } 00:08:27.601 } 00:08:27.601 ] 00:08:27.601 14:10:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1491243 00:08:27.601 14:10:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:27.601 14:10:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:27.873 Running I/O for 10 seconds... 00:08:28.947 Latency(us) 00:08:28.947 [2024-12-10T13:10:29.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.947 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.947 Nvme0n1 : 1.00 23372.00 91.30 0.00 0.00 0.00 0.00 0.00 00:08:28.947 [2024-12-10T13:10:29.687Z] =================================================================================================================== 00:08:28.947 [2024-12-10T13:10:29.687Z] Total : 23372.00 91.30 0.00 0.00 0.00 0.00 0.00 00:08:28.947 00:08:29.883 14:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c970b322-8c70-4590-89aa-7fb1bfed5450 00:08:29.883 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.883 Nvme0n1 : 2.00 23467.50 91.67 0.00 0.00 0.00 0.00 0.00 00:08:29.883 [2024-12-10T13:10:30.623Z] =================================================================================================================== 00:08:29.883 [2024-12-10T13:10:30.623Z] Total : 23467.50 91.67 0.00 0.00 0.00 0.00 0.00 00:08:29.883 00:08:29.883 true 00:08:29.883 14:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c970b322-8c70-4590-89aa-7fb1bfed5450 00:08:29.883 14:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:30.142 14:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:30.142 14:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:30.142 14:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1491243 00:08:30.715 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.715 Nvme0n1 : 3.00 23467.33 91.67 0.00 0.00 0.00 0.00 0.00 00:08:30.715 [2024-12-10T13:10:31.455Z] =================================================================================================================== 00:08:30.715 [2024-12-10T13:10:31.455Z] Total : 23467.33 91.67 0.00 0.00 0.00 0.00 0.00 00:08:30.715 00:08:31.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.651 Nvme0n1 : 4.00 23601.25 92.19 0.00 0.00 0.00 0.00 0.00 00:08:31.651 [2024-12-10T13:10:32.391Z] 
=================================================================================================================== 00:08:31.651 [2024-12-10T13:10:32.391Z] Total : 23601.25 92.19 0.00 0.00 0.00 0.00 0.00 00:08:31.651 00:08:33.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.027 Nvme0n1 : 5.00 23673.00 92.47 0.00 0.00 0.00 0.00 0.00 00:08:33.027 [2024-12-10T13:10:33.767Z] =================================================================================================================== 00:08:33.027 [2024-12-10T13:10:33.767Z] Total : 23673.00 92.47 0.00 0.00 0.00 0.00 0.00 00:08:33.027 00:08:33.962 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.962 Nvme0n1 : 6.00 23734.83 92.71 0.00 0.00 0.00 0.00 0.00 00:08:33.962 [2024-12-10T13:10:34.702Z] =================================================================================================================== 00:08:33.962 [2024-12-10T13:10:34.702Z] Total : 23734.83 92.71 0.00 0.00 0.00 0.00 0.00 00:08:33.962 00:08:34.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.901 Nvme0n1 : 7.00 23774.14 92.87 0.00 0.00 0.00 0.00 0.00 00:08:34.901 [2024-12-10T13:10:35.641Z] =================================================================================================================== 00:08:34.901 [2024-12-10T13:10:35.641Z] Total : 23774.14 92.87 0.00 0.00 0.00 0.00 0.00 00:08:34.901 00:08:35.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.836 Nvme0n1 : 8.00 23815.00 93.03 0.00 0.00 0.00 0.00 0.00 00:08:35.836 [2024-12-10T13:10:36.576Z] =================================================================================================================== 00:08:35.836 [2024-12-10T13:10:36.576Z] Total : 23815.00 93.03 0.00 0.00 0.00 0.00 0.00 00:08:35.836 00:08:36.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.771 Nvme0n1 : 9.00 23830.89 93.09 0.00 0.00 0.00 0.00 0.00 00:08:36.771 [2024-12-10T13:10:37.511Z] =================================================================================================================== 00:08:36.771 [2024-12-10T13:10:37.511Z] Total : 23830.89 93.09 0.00 0.00 0.00 0.00 0.00 00:08:36.771 00:08:37.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.706 Nvme0n1 : 10.00 23850.40 93.17 0.00 0.00 0.00 0.00 0.00 00:08:37.706 [2024-12-10T13:10:38.446Z] =================================================================================================================== 00:08:37.706 [2024-12-10T13:10:38.446Z] Total : 23850.40 93.17 0.00 0.00 0.00 0.00 0.00 00:08:37.706 00:08:37.706 00:08:37.706 Latency(us) 00:08:37.706 [2024-12-10T13:10:38.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.706 Nvme0n1 : 10.00 23855.38 93.19 0.00 0.00 5362.79 3214.38 10673.01 00:08:37.706 [2024-12-10T13:10:38.446Z] =================================================================================================================== 00:08:37.706 [2024-12-10T13:10:38.446Z] Total : 23855.38 93.19 0.00 0.00 5362.79 3214.38 10673.01 00:08:37.706 { 00:08:37.706 "results": [ 00:08:37.706 { 00:08:37.706 "job": "Nvme0n1", 00:08:37.706 "core_mask": "0x2", 00:08:37.706 "workload": "randwrite", 00:08:37.706 "status": "finished", 00:08:37.706 "queue_depth": 128, 00:08:37.706 "io_size": 4096, 00:08:37.706 
"runtime": 10.00328, 00:08:37.706 "iops": 23855.37543685671, 00:08:37.706 "mibps": 93.18506030022152, 00:08:37.706 "io_failed": 0, 00:08:37.706 "io_timeout": 0, 00:08:37.706 "avg_latency_us": 5362.794104426979, 00:08:37.706 "min_latency_us": 3214.384761904762, 00:08:37.706 "max_latency_us": 10673.005714285715 00:08:37.706 } 00:08:37.706 ], 00:08:37.706 "core_count": 1 00:08:37.706 } 00:08:37.706 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1491228 00:08:37.706 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1491228 ']' 00:08:37.706 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1491228 00:08:37.706 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:37.706 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.706 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1491228 00:08:37.965 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:37.965 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:37.965 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1491228' 00:08:37.965 killing process with pid 1491228 00:08:37.965 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1491228 00:08:37.965 Received shutdown signal, test time was about 10.000000 seconds 00:08:37.965 00:08:37.965 Latency(us) 00:08:37.965 [2024-12-10T13:10:38.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.965 [2024-12-10T13:10:38.705Z] =================================================================================================================== 00:08:37.965 [2024-12-10T13:10:38.705Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:37.965 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1491228 00:08:37.965 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:38.223 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:38.482 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c970b322-8c70-4590-89aa-7fb1bfed5450 00:08:38.482 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:38.482 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:38.482 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:38.482 14:10:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1487949 00:08:38.482 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1487949 00:08:38.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1487949 Killed "${NVMF_APP[@]}" "$@" 00:08:38.741 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:38.741 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:38.741 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:38.741 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.741 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:38.741 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1493084 00:08:38.741 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1493084 00:08:38.741 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:38.741 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1493084 ']' 00:08:38.741 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.741 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.741 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.741 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.741 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:38.741 [2024-12-10 14:10:39.313800] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:08:38.741 [2024-12-10 14:10:39.313848] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.741 [2024-12-10 14:10:39.397723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.741 [2024-12-10 14:10:39.436504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.741 [2024-12-10 14:10:39.436538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.741 [2024-12-10 14:10:39.436545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.741 [2024-12-10 14:10:39.436551] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
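The kill -9 above is the point of the dirty variant: the first nvmf_tgt (pid 1487949) dies without closing the lvstore, so when the replacement target re-creates aio_bdev, blob examine has to replay the metadata, and the "Performing recovery on blobstore" notices that follow confirm that path was taken. Schematically (old_nvmf_pid is a placeholder for the job-specific pid):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
AIO="$SPDK"/test/nvmf/target/aio_bdev
kill -9 "$old_nvmf_pid"                            # dirty shutdown: lvstore never unloaded
"$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &  # fresh target process
"$SPDK"/scripts/rpc.py bdev_aio_create "$AIO" aio_bdev 4096
# examine of the re-created aio_bdev triggers blobstore recovery and
# re-registers lvs/lvol, as the notices below show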
00:08:38.741 [2024-12-10 14:10:39.436556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.741 [2024-12-10 14:10:39.437082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.000 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.000 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:39.000 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:39.000 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.000 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:39.000 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.000 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:39.000 [2024-12-10 14:10:39.739054] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:39.000 [2024-12-10 14:10:39.739140] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:39.000 [2024-12-10 14:10:39.739167] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:39.259 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:39.259 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a5310378-a60a-4808-a6f1-166e96cc421d 00:08:39.259 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a5310378-a60a-4808-a6f1-166e96cc421d 00:08:39.259 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:39.259 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:39.259 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:39.259 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:39.259 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:39.259 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a5310378-a60a-4808-a6f1-166e96cc421d -t 2000 00:08:39.517 [ 00:08:39.517 { 00:08:39.517 "name": "a5310378-a60a-4808-a6f1-166e96cc421d", 00:08:39.517 "aliases": [ 00:08:39.517 "lvs/lvol" 00:08:39.517 ], 00:08:39.517 "product_name": "Logical Volume", 00:08:39.517 "block_size": 4096, 00:08:39.517 "num_blocks": 38912, 00:08:39.517 "uuid": "a5310378-a60a-4808-a6f1-166e96cc421d", 00:08:39.517 "assigned_rate_limits": { 00:08:39.517 "rw_ios_per_sec": 0, 00:08:39.517 "rw_mbytes_per_sec": 0, 
00:08:39.517 "r_mbytes_per_sec": 0, 00:08:39.517 "w_mbytes_per_sec": 0 00:08:39.517 }, 00:08:39.517 "claimed": false, 00:08:39.517 "zoned": false, 00:08:39.517 "supported_io_types": { 00:08:39.517 "read": true, 00:08:39.517 "write": true, 00:08:39.517 "unmap": true, 00:08:39.517 "flush": false, 00:08:39.517 "reset": true, 00:08:39.517 "nvme_admin": false, 00:08:39.517 "nvme_io": false, 00:08:39.517 "nvme_io_md": false, 00:08:39.517 "write_zeroes": true, 00:08:39.517 "zcopy": false, 00:08:39.517 "get_zone_info": false, 00:08:39.517 "zone_management": false, 00:08:39.517 "zone_append": false, 00:08:39.517 "compare": false, 00:08:39.517 "compare_and_write": false, 00:08:39.517 "abort": false, 00:08:39.517 "seek_hole": true, 00:08:39.517 "seek_data": true, 00:08:39.517 "copy": false, 00:08:39.517 "nvme_iov_md": false 00:08:39.517 }, 00:08:39.517 "driver_specific": { 00:08:39.517 "lvol": { 00:08:39.517 "lvol_store_uuid": "c970b322-8c70-4590-89aa-7fb1bfed5450", 00:08:39.517 "base_bdev": "aio_bdev", 00:08:39.517 "thin_provision": false, 00:08:39.517 "num_allocated_clusters": 38, 00:08:39.517 "snapshot": false, 00:08:39.517 "clone": false, 00:08:39.517 "esnap_clone": false 00:08:39.517 } 00:08:39.517 } 00:08:39.517 } 00:08:39.517 ] 00:08:39.517 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:39.517 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c970b322-8c70-4590-89aa-7fb1bfed5450 00:08:39.517 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:39.776 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:39.776 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c970b322-8c70-4590-89aa-7fb1bfed5450 00:08:39.776 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:39.776 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:39.776 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:40.034 [2024-12-10 14:10:40.679744] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:40.034 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c970b322-8c70-4590-89aa-7fb1bfed5450 00:08:40.034 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:40.034 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c970b322-8c70-4590-89aa-7fb1bfed5450 00:08:40.034 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.034 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.034 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.034 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.034 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.034 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.034 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.034 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:40.034 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c970b322-8c70-4590-89aa-7fb1bfed5450 00:08:40.292 request: 00:08:40.292 { 00:08:40.292 "uuid": "c970b322-8c70-4590-89aa-7fb1bfed5450", 00:08:40.292 "method": "bdev_lvol_get_lvstores", 00:08:40.292 "req_id": 1 00:08:40.292 } 00:08:40.292 Got JSON-RPC error response 00:08:40.292 response: 00:08:40.292 { 00:08:40.292 "code": -19, 00:08:40.292 "message": "No such device" 00:08:40.292 } 00:08:40.292 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:40.292 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:40.292 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:40.292 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:40.292 14:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:40.551 aio_bdev 00:08:40.551 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a5310378-a60a-4808-a6f1-166e96cc421d 00:08:40.551 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a5310378-a60a-4808-a6f1-166e96cc421d 00:08:40.551 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.551 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:40.551 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.551 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.551 14:10:41 
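The NOT wrapper traced above asserts the negative case: with the base aio_bdev hot-removed, the lvstore must be gone, so bdev_lvol_get_lvstores is expected to fail with the -19 "No such device" response shown. A stand-alone version of that assertion might look like:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
if "$RPC" bdev_lvol_get_lvstores -u c970b322-8c70-4590-89aa-7fb1bfed5450 2>/dev/null; then
    echo "lvstore unexpectedly survived base bdev removal" >&2
    exit 1
fi  # rpc.py exits non-zero on the -19 JSON-RPC error, so the check passes here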
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:40.810 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a5310378-a60a-4808-a6f1-166e96cc421d -t 2000 00:08:40.810 [ 00:08:40.810 { 00:08:40.810 "name": "a5310378-a60a-4808-a6f1-166e96cc421d", 00:08:40.810 "aliases": [ 00:08:40.810 "lvs/lvol" 00:08:40.810 ], 00:08:40.810 "product_name": "Logical Volume", 00:08:40.810 "block_size": 4096, 00:08:40.810 "num_blocks": 38912, 00:08:40.810 "uuid": "a5310378-a60a-4808-a6f1-166e96cc421d", 00:08:40.810 "assigned_rate_limits": { 00:08:40.810 "rw_ios_per_sec": 0, 00:08:40.810 "rw_mbytes_per_sec": 0, 00:08:40.810 "r_mbytes_per_sec": 0, 00:08:40.810 "w_mbytes_per_sec": 0 00:08:40.810 }, 00:08:40.810 "claimed": false, 00:08:40.810 "zoned": false, 00:08:40.810 "supported_io_types": { 00:08:40.810 "read": true, 00:08:40.810 "write": true, 00:08:40.810 "unmap": true, 00:08:40.810 "flush": false, 00:08:40.810 "reset": true, 00:08:40.810 "nvme_admin": false, 00:08:40.810 "nvme_io": false, 00:08:40.810 "nvme_io_md": false, 00:08:40.810 "write_zeroes": true, 00:08:40.810 "zcopy": false, 00:08:40.810 "get_zone_info": false, 00:08:40.810 "zone_management": false, 00:08:40.810 "zone_append": false, 00:08:40.810 "compare": false, 00:08:40.810 "compare_and_write": false, 00:08:40.810 "abort": false, 00:08:40.810 "seek_hole": true, 00:08:40.810 "seek_data": true, 00:08:40.810 "copy": false, 00:08:40.810 "nvme_iov_md": false 00:08:40.810 }, 00:08:40.810 "driver_specific": { 00:08:40.810 "lvol": { 00:08:40.810 "lvol_store_uuid": "c970b322-8c70-4590-89aa-7fb1bfed5450", 00:08:40.810 "base_bdev": "aio_bdev", 00:08:40.810 "thin_provision": false, 00:08:40.810 "num_allocated_clusters": 38, 00:08:40.810 "snapshot": false, 00:08:40.810 "clone": false, 00:08:40.810 "esnap_clone": false 00:08:40.810 } 00:08:40.810 } 00:08:40.810 } 00:08:40.810 ] 00:08:40.810 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:40.810 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c970b322-8c70-4590-89aa-7fb1bfed5450 00:08:40.810 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:41.068 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:41.068 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c970b322-8c70-4590-89aa-7fb1bfed5450 00:08:41.068 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:41.327 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:41.327 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a5310378-a60a-4808-a6f1-166e96cc421d 00:08:41.327 14:10:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c970b322-8c70-4590-89aa-7fb1bfed5450 00:08:41.585 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:41.844 00:08:41.844 real 0m16.743s 00:08:41.844 user 0m44.006s 00:08:41.844 sys 0m3.661s 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:41.844 ************************************ 00:08:41.844 END TEST lvs_grow_dirty 00:08:41.844 ************************************ 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:41.844 nvmf_trace.0 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:41.844 rmmod nvme_tcp 00:08:41.844 rmmod nvme_fabrics 00:08:41.844 rmmod nvme_keyring 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:41.844 
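Teardown bundles the per-app trace buffer out of /dev/shm exactly as shown (tar -C /dev/shm/ ... nvmf_trace.0); per the startup notice earlier in the run, the same file can be decoded offline. The spdk_trace invocation below is an assumption based on that notice, not something this log exercises:

tar -C /dev/shm/ -czf nvmf_trace.0_shm.tar.gz nvmf_trace.0
# hypothetical offline decode of the copied shm file:
# "$SPDK"/build/bin/spdk_trace -f /dev/shm/nvmf_trace.0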
14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1493084 ']' 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1493084 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1493084 ']' 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1493084 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.844 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1493084 00:08:42.103 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.103 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.103 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1493084' 00:08:42.103 killing process with pid 1493084 00:08:42.103 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1493084 00:08:42.103 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1493084 00:08:42.103 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:42.103 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:42.103 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:42.103 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:42.103 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:42.103 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:42.103 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:42.103 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:42.103 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:42.103 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.103 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.103 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.638 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:44.638 00:08:44.638 real 0m42.822s 00:08:44.638 user 1m5.642s 00:08:44.638 sys 0m10.581s 00:08:44.638 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.638 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:44.638 ************************************ 00:08:44.638 END TEST nvmf_lvs_grow 00:08:44.638 ************************************ 00:08:44.638 14:10:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:44.638 14:10:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:44.638 14:10:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.638 14:10:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:44.638 ************************************ 00:08:44.638 START TEST nvmf_bdev_io_wait 00:08:44.638 ************************************ 00:08:44.638 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:44.638 * Looking for test storage... 00:08:44.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:44.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.638 --rc genhtml_branch_coverage=1 00:08:44.638 --rc genhtml_function_coverage=1 00:08:44.638 --rc genhtml_legend=1 00:08:44.638 --rc geninfo_all_blocks=1 00:08:44.638 --rc geninfo_unexecuted_blocks=1 00:08:44.638 00:08:44.638 ' 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:44.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.638 --rc genhtml_branch_coverage=1 00:08:44.638 --rc genhtml_function_coverage=1 00:08:44.638 --rc genhtml_legend=1 00:08:44.638 --rc geninfo_all_blocks=1 00:08:44.638 --rc geninfo_unexecuted_blocks=1 00:08:44.638 00:08:44.638 ' 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:44.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.638 --rc genhtml_branch_coverage=1 00:08:44.638 --rc genhtml_function_coverage=1 00:08:44.638 --rc genhtml_legend=1 00:08:44.638 --rc geninfo_all_blocks=1 00:08:44.638 --rc geninfo_unexecuted_blocks=1 00:08:44.638 00:08:44.638 ' 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:44.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.638 --rc genhtml_branch_coverage=1 00:08:44.638 --rc genhtml_function_coverage=1 00:08:44.638 --rc genhtml_legend=1 00:08:44.638 --rc geninfo_all_blocks=1 00:08:44.638 --rc geninfo_unexecuted_blocks=1 00:08:44.638 00:08:44.638 ' 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.638 14:10:45 
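What is being traced here is scripts/common.sh deciding whether the installed lcov predates 2.0 (to pick matching coverage flags): cmp_versions splits both version strings on '.', '-' and ':' and compares the fields numerically, left to right. A minimal sketch of the same comparison, not the script's exact helper:

lt() {   # true when version $1 sorts before version $2, field by field
    local IFS=.-:
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1
}
lt 1.15 2 && echo "lcov 1.15 predates 2"   # prints the message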
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.638 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:44.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
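nvmftestinit now probes the host for supported NICs: the gather_supported_nvmf_pci_devs trace that follows builds e810/x722/mlx arrays from a PCI-bus cache keyed by vendor:device (Intel 0x8086 devices 0x1592/0x159b for E810, Mellanox 0x15b3). A rough lspci-based equivalent, offered only to illustrate the lookup, not as the script's actual implementation:

# print PCI addresses of Intel E810 functions (ids as in the trace below):
lspci -Dn | awk '$3 == "8086:1592" || $3 == "8086:159b" { print $1 }'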
MALLOC_BLOCK_SIZE=512 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:44.639 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:51.213 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:51.213 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.213 14:10:51 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:51.213 Found net devices under 0000:af:00.0: cvl_0_0 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:51.213 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:51.214 Found net devices under 0000:af:00.1: cvl_0_1 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
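No lspci parsing is behind the "Found net devices under ..." lines above: for each whitelisted PCI function the harness globs sysfs and takes whatever netdev the driver registered there. A standalone sketch of the same lookup, with the addresses hard-coded to the two E810 ports found in this run:

    for pci in 0000:af:00.0 0000:af:00.1; do
        # each entry under .../net/ is a kernel netdev name, e.g. cvl_0_0
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            echo "Found net device under $pci: ${dev##*/}"
        done
    done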
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:51.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:08:51.214 00:08:51.214 --- 10.0.0.2 ping statistics --- 00:08:51.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.214 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:51.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:08:51.214 00:08:51.214 --- 10.0.0.1 ping statistics --- 00:08:51.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.214 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:51.214 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:51.473 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:51.473 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:51.473 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:51.473 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.473 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1497753 00:08:51.473 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1497753 00:08:51.473 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:51.473 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1497753 ']' 00:08:51.473 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.473 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.473 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.473 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.473 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.473 [2024-12-10 14:10:52.012335] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
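The TCP topology built above uses the two physical E810 ports rather than a veth pair: cvl_0_0 is moved into a fresh namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), so traffic between them presumably crosses the wire on a back-to-back cabled rig rather than the loopback path. Condensed from the common.sh commands traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # open the NVMe/TCP port, tagged so teardown can find the rule later:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator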
00:08:51.473 [2024-12-10 14:10:52.012377] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.473 [2024-12-10 14:10:52.095922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.473 [2024-12-10 14:10:52.138738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.473 [2024-12-10 14:10:52.138775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.473 [2024-12-10 14:10:52.138781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.473 [2024-12-10 14:10:52.138787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.473 [2024-12-10 14:10:52.138792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.473 [2024-12-10 14:10:52.140186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.473 [2024-12-10 14:10:52.140299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.473 [2024-12-10 14:10:52.140340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.473 [2024-12-10 14:10:52.140341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:52.409 [2024-12-10 14:10:52.958142] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.409 Malloc0 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.409 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.409 [2024-12-10 14:10:53.013346] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1497867 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1497869 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
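Because nvmf_tgt was launched with --wait-for-rpc, the framework stays uninitialized until framework_start_init is called over RPC, which is what lets bdev_set_options take effect before any bdev exists. rpc_cmd in the trace is the harness wrapper around SPDK's scripts/rpc.py, so the target build-out above is equivalent to issuing, against the /var/tmp/spdk.sock shown earlier:

    rpc.py bdev_set_options -p 5 -c 1                  # tiny shared bdev_io pool
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Capping the pool at 5 entries appears to be the point of this test: bdevperf's queue depth of 128 exhausts it, so submissions hit ENOMEM and exercise the bdev_io_wait retry path the test is named after.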
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.409 { 00:08:52.409 "params": { 00:08:52.409 "name": "Nvme$subsystem", 00:08:52.409 "trtype": "$TEST_TRANSPORT", 00:08:52.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.409 "adrfam": "ipv4", 00:08:52.409 "trsvcid": "$NVMF_PORT", 00:08:52.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.409 "hdgst": ${hdgst:-false}, 00:08:52.409 "ddgst": ${ddgst:-false} 00:08:52.409 }, 00:08:52.409 "method": "bdev_nvme_attach_controller" 00:08:52.409 } 00:08:52.409 EOF 00:08:52.409 )") 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1497871 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1497874 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.409 { 00:08:52.409 "params": { 00:08:52.409 "name": "Nvme$subsystem", 00:08:52.409 "trtype": "$TEST_TRANSPORT", 00:08:52.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.409 "adrfam": "ipv4", 00:08:52.409 "trsvcid": "$NVMF_PORT", 00:08:52.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.409 "hdgst": ${hdgst:-false}, 00:08:52.409 "ddgst": ${ddgst:-false} 00:08:52.409 }, 00:08:52.409 "method": "bdev_nvme_attach_controller" 00:08:52.409 } 00:08:52.409 EOF 00:08:52.409 )") 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:52.409 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.410 { 00:08:52.410 "params": { 00:08:52.410 "name": "Nvme$subsystem", 00:08:52.410 "trtype": 
"$TEST_TRANSPORT", 00:08:52.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.410 "adrfam": "ipv4", 00:08:52.410 "trsvcid": "$NVMF_PORT", 00:08:52.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.410 "hdgst": ${hdgst:-false}, 00:08:52.410 "ddgst": ${ddgst:-false} 00:08:52.410 }, 00:08:52.410 "method": "bdev_nvme_attach_controller" 00:08:52.410 } 00:08:52.410 EOF 00:08:52.410 )") 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.410 { 00:08:52.410 "params": { 00:08:52.410 "name": "Nvme$subsystem", 00:08:52.410 "trtype": "$TEST_TRANSPORT", 00:08:52.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.410 "adrfam": "ipv4", 00:08:52.410 "trsvcid": "$NVMF_PORT", 00:08:52.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.410 "hdgst": ${hdgst:-false}, 00:08:52.410 "ddgst": ${ddgst:-false} 00:08:52.410 }, 00:08:52.410 "method": "bdev_nvme_attach_controller" 00:08:52.410 } 00:08:52.410 EOF 00:08:52.410 )") 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1497867 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.410 "params": { 00:08:52.410 "name": "Nvme1", 00:08:52.410 "trtype": "tcp", 00:08:52.410 "traddr": "10.0.0.2", 00:08:52.410 "adrfam": "ipv4", 00:08:52.410 "trsvcid": "4420", 00:08:52.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.410 "hdgst": false, 00:08:52.410 "ddgst": false 00:08:52.410 }, 00:08:52.410 "method": "bdev_nvme_attach_controller" 00:08:52.410 }' 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.410 "params": { 00:08:52.410 "name": "Nvme1", 00:08:52.410 "trtype": "tcp", 00:08:52.410 "traddr": "10.0.0.2", 00:08:52.410 "adrfam": "ipv4", 00:08:52.410 "trsvcid": "4420", 00:08:52.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.410 "hdgst": false, 00:08:52.410 "ddgst": false 00:08:52.410 }, 00:08:52.410 "method": "bdev_nvme_attach_controller" 00:08:52.410 }' 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.410 "params": { 00:08:52.410 "name": "Nvme1", 00:08:52.410 "trtype": "tcp", 00:08:52.410 "traddr": "10.0.0.2", 00:08:52.410 "adrfam": "ipv4", 00:08:52.410 "trsvcid": "4420", 00:08:52.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.410 "hdgst": false, 00:08:52.410 "ddgst": false 00:08:52.410 }, 00:08:52.410 "method": "bdev_nvme_attach_controller" 00:08:52.410 }' 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:52.410 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.410 "params": { 00:08:52.410 "name": "Nvme1", 00:08:52.410 "trtype": "tcp", 00:08:52.410 "traddr": "10.0.0.2", 00:08:52.410 "adrfam": "ipv4", 00:08:52.410 "trsvcid": "4420", 00:08:52.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.410 "hdgst": false, 00:08:52.410 "ddgst": false 00:08:52.410 }, 00:08:52.410 "method": "bdev_nvme_attach_controller" 00:08:52.410 }' 00:08:52.410 [2024-12-10 14:10:53.064887] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:08:52.410 [2024-12-10 14:10:53.064936] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:52.410 [2024-12-10 14:10:53.065670] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:08:52.410 [2024-12-10 14:10:53.065714] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:52.410 [2024-12-10 14:10:53.067129] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:08:52.410 [2024-12-10 14:10:53.067173] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:52.410 [2024-12-10 14:10:53.070673] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
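The four bdevperf instances run in parallel, one workload per core (-m 0x10 write, 0x20 read, 0x40 flush, 0x80 unmap), so none shares a reactor with the target's 0xF mask. The /dev/fd/63 each receives through --json is a process substitution over gen_nvmf_target_json; the printf output above is the bdev_nvme_attach_controller element that function renders, which it wraps in the usual bdev subsystem-config envelope before bdevperf consumes it (the envelope itself is not echoed in this trace). A condensed launch equivalent to the write instance:

    ./build/examples/bdevperf -m 0x10 -i 1 -s 256 \
        -q 128 -o 4096 -w write -t 1 \
        --json <(gen_nvmf_target_json)    # shows up as --json /dev/fd/63 in the trace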
00:08:52.410 [2024-12-10 14:10:53.070712] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:52.670 [2024-12-10 14:10:53.269324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.670 [2024-12-10 14:10:53.314252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:52.670 [2024-12-10 14:10:53.370981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.928 [2024-12-10 14:10:53.419947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:52.928 [2024-12-10 14:10:53.423658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.929 [2024-12-10 14:10:53.466090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:52.929 [2024-12-10 14:10:53.483207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.929 [2024-12-10 14:10:53.523360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:52.929 Running I/O for 1 seconds... 00:08:52.929 Running I/O for 1 seconds... 00:08:53.188 Running I/O for 1 seconds... 00:08:53.188 Running I/O for 1 seconds... 00:08:54.125 7948.00 IOPS, 31.05 MiB/s 00:08:54.125 Latency(us) 00:08:54.125 [2024-12-10T13:10:54.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.125 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:54.125 Nvme1n1 : 1.02 7953.45 31.07 0.00 0.00 15993.24 6584.81 25465.42 00:08:54.125 [2024-12-10T13:10:54.865Z] =================================================================================================================== 00:08:54.125 [2024-12-10T13:10:54.865Z] Total : 7953.45 31.07 0.00 0.00 15993.24 6584.81 25465.42 00:08:54.125 11291.00 IOPS, 44.11 MiB/s 00:08:54.125 Latency(us) 00:08:54.125 [2024-12-10T13:10:54.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.125 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:54.125 Nvme1n1 : 1.01 11347.00 44.32 0.00 0.00 11239.40 5742.20 22968.81 00:08:54.125 [2024-12-10T13:10:54.865Z] =================================================================================================================== 00:08:54.125 [2024-12-10T13:10:54.865Z] Total : 11347.00 44.32 0.00 0.00 11239.40 5742.20 22968.81 00:08:54.125 8054.00 IOPS, 31.46 MiB/s 00:08:54.125 Latency(us) 00:08:54.125 [2024-12-10T13:10:54.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.125 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:54.125 Nvme1n1 : 1.01 8188.90 31.99 0.00 0.00 15598.47 2855.50 38697.45 00:08:54.125 [2024-12-10T13:10:54.865Z] =================================================================================================================== 00:08:54.125 [2024-12-10T13:10:54.865Z] Total : 8188.90 31.99 0.00 0.00 15598.47 2855.50 38697.45 00:08:54.125 242848.00 IOPS, 948.62 MiB/s 00:08:54.125 Latency(us) 00:08:54.125 [2024-12-10T13:10:54.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.125 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:54.125 Nvme1n1 : 1.00 242479.67 947.19 0.00 0.00 524.99 227.23 1505.77 00:08:54.125 [2024-12-10T13:10:54.866Z] 
=================================================================================================================== 00:08:54.126 [2024-12-10T13:10:54.866Z] Total : 242479.67 947.19 0.00 0.00 524.99 227.23 1505.77 00:08:54.126 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1497869 00:08:54.126 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1497871 00:08:54.126 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1497874 00:08:54.126 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:54.126 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.126 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.126 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.126 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:54.126 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:54.126 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:54.126 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:54.126 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:54.126 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:54.126 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:54.126 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:54.126 rmmod nvme_tcp 00:08:54.386 rmmod nvme_fabrics 00:08:54.386 rmmod nvme_keyring 00:08:54.386 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:54.386 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:54.386 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:54.386 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1497753 ']' 00:08:54.386 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1497753 00:08:54.386 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1497753 ']' 00:08:54.386 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1497753 00:08:54.386 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:54.386 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.386 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1497753 00:08:54.386 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.386 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.386 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 1497753' 00:08:54.386 killing process with pid 1497753 00:08:54.386 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1497753 00:08:54.386 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1497753 00:08:54.386 14:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:54.386 14:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:54.386 14:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:54.386 14:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:54.386 14:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:54.386 14:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:54.386 14:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:54.386 14:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:54.386 14:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:54.386 14:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.386 14:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.386 14:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:56.926 00:08:56.926 real 0m12.263s 00:08:56.926 user 0m19.107s 00:08:56.926 sys 0m6.823s 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:56.926 ************************************ 00:08:56.926 END TEST nvmf_bdev_io_wait 00:08:56.926 ************************************ 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:56.926 ************************************ 00:08:56.926 START TEST nvmf_queue_depth 00:08:56.926 ************************************ 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:56.926 * Looking for test storage... 
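Teardown mirrors setup and runs from the EXIT trap installed by nvmftestinit: the subsystem is deleted over RPC, the kernel initiator modules are unloaded (the rmmod lines above), the target is killed by the pid recorded at startup, and every firewall rule the harness tagged with an SPDK_NVMF comment is dropped in one save/filter/restore pass. Condensed, with the namespace removal reconstructed since _remove_spdk_ns runs with xtrace suppressed:

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp                               # also drops nvme_fabrics/nvme_keyring
    kill 1497753 && wait 1497753                          # nvmfpid from nvmfappstart
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # removes only the tagged rules
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk                       # assumed; hidden inside _remove_spdk_ns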
00:08:56.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:56.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.926 --rc genhtml_branch_coverage=1 00:08:56.926 --rc genhtml_function_coverage=1 00:08:56.926 --rc genhtml_legend=1 00:08:56.926 --rc geninfo_all_blocks=1 00:08:56.926 --rc geninfo_unexecuted_blocks=1 00:08:56.926 00:08:56.926 ' 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:56.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.926 --rc genhtml_branch_coverage=1 00:08:56.926 --rc genhtml_function_coverage=1 00:08:56.926 --rc genhtml_legend=1 00:08:56.926 --rc geninfo_all_blocks=1 00:08:56.926 --rc geninfo_unexecuted_blocks=1 00:08:56.926 00:08:56.926 ' 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:56.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.926 --rc genhtml_branch_coverage=1 00:08:56.926 --rc genhtml_function_coverage=1 00:08:56.926 --rc genhtml_legend=1 00:08:56.926 --rc geninfo_all_blocks=1 00:08:56.926 --rc geninfo_unexecuted_blocks=1 00:08:56.926 00:08:56.926 ' 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:56.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.926 --rc genhtml_branch_coverage=1 00:08:56.926 --rc genhtml_function_coverage=1 00:08:56.926 --rc genhtml_legend=1 00:08:56.926 --rc geninfo_all_blocks=1 00:08:56.926 --rc geninfo_unexecuted_blocks=1 00:08:56.926 00:08:56.926 ' 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
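The lcov version gate above goes through scripts/common.sh's comparison helpers: each version string is split on '.', '-' and ':' into an array and compared field by field as integers, so 1.15 versus 2 is decided at the first field (1 < 2) and lt returns 0. A simplified standalone sketch of the same idea (the real cmp_versions also normalizes fields and handles >, = and mixed operators):

    lt() {
        local -a a b; local i
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # decided: less-than
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # decided: greater-than
        done
        return 1    # all fields equal: not less-than
    }
    lt 1.15 2 && echo "1.15 < 2"    # prints: 1.15 < 2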
-- nvmf/common.sh@7 -- # uname -s 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.926 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:56.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:56.927 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.500 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:03.500 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:03.500 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:03.500 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:03.500 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:03.500 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:03.500 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:03.500 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:03.500 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:03.501 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:03.501 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:03.501 Found net devices under 0000:af:00.0: cvl_0_0 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:03.501 Found net devices under 0000:af:00.1: cvl_0_1 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
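
The NIC discovery above is driven purely by sysfs: gather_supported_nvmf_pci_devs collects the PCI IDs of supported Intel (e810, x722) and Mellanox parts into arrays, keeps only the configured family (e810 here, per SPDK_TEST_NVMF_NICS), and resolves each matching PCI function to its kernel netdev. A rough standalone equivalent of one lookup, using a device actually found in this run:

    pci=0000:af:00.0
    cat "/sys/bus/pci/devices/$pci/device"   # 0x159b, an E810 port
    ls "/sys/bus/pci/devices/$pci/net/"      # cvl_0_0, the netdev bound to it

Both E810 ports (0000:af:00.0 and 0000:af:00.1) resolve to cvl_0_0 and cvl_0_1, which is why the init path settles on is_hw=yes and proceeds to nvmf_tcp_init.
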
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:03.501 14:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:03.501 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:03.501 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:03.501 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:03.501 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:03.501 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:03.501 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:03.501 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:03.501 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:03.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:03.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:09:03.501 00:09:03.501 --- 10.0.0.2 ping statistics --- 00:09:03.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.501 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:09:03.501 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:03.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:03.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:09:03.501 00:09:03.501 --- 10.0.0.1 ping statistics --- 00:09:03.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.501 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:09:03.501 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.501 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:03.501 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:03.501 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.501 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:03.501 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:03.502 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.502 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:03.502 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:03.760 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:03.760 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:03.760 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:03.760 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.760 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1502196 00:09:03.760 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1502196 00:09:03.760 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:03.760 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1502196 ']' 00:09:03.760 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.760 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.760 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.760 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.761 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.761 [2024-12-10 14:11:04.311744] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
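
nvmftestinit has now split the two E810 ports across a namespace boundary: cvl_0_0 (10.0.0.2) was moved into the cvl_0_0_ns_spdk namespace to host the target, while cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator side, and the two pings above confirm reachability in both directions. The target launch that follows, reduced to its essentials:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
    # -m 0x2 pins the single reactor to core 1, matching the
    # "Reactor started on core 1" notice printed once the app is up
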
00:09:03.761 [2024-12-10 14:11:04.311792] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.761 [2024-12-10 14:11:04.399551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.761 [2024-12-10 14:11:04.438675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.761 [2024-12-10 14:11:04.438712] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.761 [2024-12-10 14:11:04.438719] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.761 [2024-12-10 14:11:04.438725] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.761 [2024-12-10 14:11:04.438730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.761 [2024-12-10 14:11:04.439271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:04.020 [2024-12-10 14:11:04.575140] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:04.020 Malloc0 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.020 14:11:04 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:04.020 [2024-12-10 14:11:04.625224] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1502364 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1502364 /var/tmp/bdevperf.sock 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1502364 ']' 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:04.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.020 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:04.020 [2024-12-10 14:11:04.676378] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
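
With the target listening, the test has provisioned the whole data path over JSON-RPC and is starting the load generator. The same sequence as standalone scripts/rpc.py invocations (the test drives them through its rpc_cmd wrapper, which talks to the RPC socket inside the namespace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf is then launched with -q 1024, the queue depth this test exists to exercise, issuing 4096-byte verify I/O for 10 seconds against the exported namespace.
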
00:09:04.020 [2024-12-10 14:11:04.676418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502364 ] 00:09:04.020 [2024-12-10 14:11:04.755030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.279 [2024-12-10 14:11:04.795618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.279 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.279 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:04.279 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:04.279 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.279 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:04.279 NVMe0n1 00:09:04.279 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.279 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:04.536 Running I/O for 10 seconds... 00:09:06.406 12063.00 IOPS, 47.12 MiB/s [2024-12-10T13:11:08.522Z] 12288.00 IOPS, 48.00 MiB/s [2024-12-10T13:11:09.457Z] 12461.67 IOPS, 48.68 MiB/s [2024-12-10T13:11:10.456Z] 12535.75 IOPS, 48.97 MiB/s [2024-12-10T13:11:11.403Z] 12577.80 IOPS, 49.13 MiB/s [2024-12-10T13:11:12.338Z] 12608.67 IOPS, 49.25 MiB/s [2024-12-10T13:11:13.275Z] 12614.14 IOPS, 49.27 MiB/s [2024-12-10T13:11:14.210Z] 12622.75 IOPS, 49.31 MiB/s [2024-12-10T13:11:15.146Z] 12610.33 IOPS, 49.26 MiB/s [2024-12-10T13:11:15.405Z] 12616.20 IOPS, 49.28 MiB/s 00:09:14.665 Latency(us) 00:09:14.665 [2024-12-10T13:11:15.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.665 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:14.665 Verification LBA range: start 0x0 length 0x4000 00:09:14.665 NVMe0n1 : 10.05 12641.29 49.38 0.00 0.00 80720.44 10797.84 51679.82 00:09:14.665 [2024-12-10T13:11:15.405Z] =================================================================================================================== 00:09:14.665 [2024-12-10T13:11:15.405Z] Total : 12641.29 49.38 0.00 0.00 80720.44 10797.84 51679.82 00:09:14.665 { 00:09:14.665 "results": [ 00:09:14.665 { 00:09:14.665 "job": "NVMe0n1", 00:09:14.665 "core_mask": "0x1", 00:09:14.665 "workload": "verify", 00:09:14.665 "status": "finished", 00:09:14.665 "verify_range": { 00:09:14.665 "start": 0, 00:09:14.665 "length": 16384 00:09:14.665 }, 00:09:14.665 "queue_depth": 1024, 00:09:14.665 "io_size": 4096, 00:09:14.665 "runtime": 10.049133, 00:09:14.665 "iops": 12641.289552043943, 00:09:14.665 "mibps": 49.380037312671654, 00:09:14.665 "io_failed": 0, 00:09:14.665 "io_timeout": 0, 00:09:14.665 "avg_latency_us": 80720.4358622551, 00:09:14.665 "min_latency_us": 10797.83619047619, 00:09:14.665 "max_latency_us": 51679.817142857144 00:09:14.665 } 00:09:14.665 ], 00:09:14.665 "core_count": 1 00:09:14.665 } 00:09:14.665 14:11:15 
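
The throughput and latency in the JSON results block are mutually consistent via Little's law: with 1024 commands held in flight, average latency = queue depth / IOPS = 1024 / 12641.29 ≈ 81.0 ms, matching the reported avg_latency_us of roughly 80720. At this depth the Malloc0 backend is fully saturated and latency is dominated by queueing rather than by the backing store, which is precisely the regime the queue-depth test is meant to exercise.
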
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1502364 00:09:14.665 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1502364 ']' 00:09:14.665 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1502364 00:09:14.665 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:14.665 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.665 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1502364 00:09:14.665 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:14.665 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:14.665 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1502364' 00:09:14.665 killing process with pid 1502364 00:09:14.665 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1502364 00:09:14.665 Received shutdown signal, test time was about 10.000000 seconds 00:09:14.665 00:09:14.665 Latency(us) 00:09:14.665 [2024-12-10T13:11:15.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.665 [2024-12-10T13:11:15.405Z] =================================================================================================================== 00:09:14.665 [2024-12-10T13:11:15.405Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:14.665 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1502364 00:09:14.665 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:14.665 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:14.665 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:14.665 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:14.665 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:14.665 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:14.665 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:14.665 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:14.924 rmmod nvme_tcp 00:09:14.924 rmmod nvme_fabrics 00:09:14.924 rmmod nvme_keyring 00:09:14.924 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:14.924 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:14.924 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:14.924 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1502196 ']' 00:09:14.924 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1502196 00:09:14.924 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1502196 ']' 00:09:14.924 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 1502196 00:09:14.924 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:14.924 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.924 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1502196 00:09:14.924 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:14.924 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:14.924 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1502196' 00:09:14.924 killing process with pid 1502196 00:09:14.924 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1502196 00:09:14.924 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1502196 00:09:15.184 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:15.184 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:15.184 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:15.184 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:15.184 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:15.184 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:15.184 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:15.184 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:15.184 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:15.184 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.184 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.184 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.089 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:17.089 00:09:17.089 real 0m20.522s 00:09:17.089 user 0m23.283s 00:09:17.089 sys 0m6.619s 00:09:17.089 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.089 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.089 ************************************ 00:09:17.089 END TEST nvmf_queue_depth 00:09:17.089 ************************************ 00:09:17.089 14:11:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:17.089 14:11:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:17.089 14:11:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.089 14:11:17 nvmf_tcp.nvmf_target_core -- 
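
The iptr helper traced above removes the firewall exception from setup without tracking rule positions: every rule the harness inserts carries an "-m comment --comment SPDK_NVMF:..." tag (visible in the earlier ipts call), so teardown can regenerate the ruleset minus the tagged entries in one pipeline:

    iptables-save | grep -v SPDK_NVMF | iptables-restore

Tagging rules and filtering on the tag keeps teardown correct even if rule ordering changed in the meantime, at the cost of briefly rewriting the whole ruleset.
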
common/autotest_common.sh@10 -- # set +x 00:09:17.348 ************************************ 00:09:17.348 START TEST nvmf_target_multipath 00:09:17.348 ************************************ 00:09:17.348 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:17.348 * Looking for test storage... 00:09:17.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:17.349 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:17.349 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:17.349 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:17.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.349 --rc genhtml_branch_coverage=1 00:09:17.349 --rc genhtml_function_coverage=1 00:09:17.349 --rc genhtml_legend=1 00:09:17.349 --rc geninfo_all_blocks=1 00:09:17.349 --rc geninfo_unexecuted_blocks=1 00:09:17.349 00:09:17.349 ' 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:17.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.349 --rc genhtml_branch_coverage=1 00:09:17.349 --rc genhtml_function_coverage=1 00:09:17.349 --rc genhtml_legend=1 00:09:17.349 --rc geninfo_all_blocks=1 00:09:17.349 --rc geninfo_unexecuted_blocks=1 00:09:17.349 00:09:17.349 ' 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:17.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.349 --rc genhtml_branch_coverage=1 00:09:17.349 --rc genhtml_function_coverage=1 00:09:17.349 --rc genhtml_legend=1 00:09:17.349 --rc geninfo_all_blocks=1 00:09:17.349 --rc geninfo_unexecuted_blocks=1 00:09:17.349 00:09:17.349 ' 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:17.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.349 --rc genhtml_branch_coverage=1 00:09:17.349 --rc genhtml_function_coverage=1 00:09:17.349 --rc genhtml_legend=1 00:09:17.349 --rc geninfo_all_blocks=1 00:09:17.349 --rc geninfo_unexecuted_blocks=1 00:09:17.349 00:09:17.349 ' 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
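
The burst of scripts/common.sh traces above is a semantic version comparison, not multipath logic: the test checks whether the installed lcov is older than 2 ("lt 1.15 2") to decide which set of coverage flags to export. The mechanics visible in the trace, reduced to a sketch:

    IFS=.-: read -ra ver1 <<< "1.15"   # ver1=(1 15), so ver1_l=2
    IFS=.-: read -ra ver2 <<< "2"      # ver2=(2),    so ver2_l=1
    # components are compared left to right as decimals; 1 < 2 decides it,
    # so 1.15 < 2 holds and the lcov 1.x LCOV_OPTS set is exported
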
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:17.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:17.349 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:17.350 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:17.350 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:17.350 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:17.350 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:17.350 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:17.350 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:17.350 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:17.350 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:17.350 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.350 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.350 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.350 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:17.350 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:17.350 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:17.350 14:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:23.918 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:23.918 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:23.918 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:23.918 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:23.918 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:23.918 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:23.918 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:23.918 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:23.918 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:23.919 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:23.919 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:23.919 Found net devices under 0000:af:00.0: cvl_0_0 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.919 14:11:24 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:23.919 Found net devices under 0000:af:00.1: cvl_0_1 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.919 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:24.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:09:24.179 00:09:24.179 --- 10.0.0.2 ping statistics --- 00:09:24.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.179 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:24.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:09:24.179 00:09:24.179 --- 10.0.0.1 ping statistics --- 00:09:24.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.179 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:24.179 only one NIC for nvmf test 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
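At this point nvmf_tcp_init has finished building the single-NIC TCP test topology: the target-side port cvl_0_0 is moved into a dedicated network namespace, both ends get addresses on 10.0.0.0/24, an iptables rule admits NVMe/TCP traffic on port 4420, and one ping in each direction verifies reachability. A minimal standalone sketch of the same steps, assuming the cvl_0_0/cvl_0_1 interface names this run discovered (other NICs enumerate under different names):

    # Give the target its own namespace so one physical NIC pair can play
    # both the target and initiator roles on the same host.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator, host ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The SPDK_NVMF comment tag is what later lets teardown strip exactly
    # these rules via: iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # host -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> host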
00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:24.179 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:24.179 rmmod nvme_tcp 00:09:24.179 rmmod nvme_fabrics 00:09:24.441 rmmod nvme_keyring 00:09:24.441 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:24.441 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:24.441 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:24.441 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:24.441 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:24.441 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:24.441 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:24.441 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:24.441 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:24.441 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:24.441 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:24.441 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:24.441 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:24.441 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.441 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.441 14:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.351 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:26.351 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:26.351 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:26.351 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:26.351 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:26.351 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:26.351 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:26.351 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:26.351 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:26.352 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:26.352 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:26.352 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:26.352 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:26.352 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:26.352 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:26.352 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:26.352 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:26.352 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:26.352 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:26.352 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:26.352 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:26.352 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:26.352 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.352 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.352 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.352 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:26.352 00:09:26.352 real 0m9.207s 00:09:26.352 user 0m2.098s 00:09:26.352 sys 0m5.148s 00:09:26.352 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.352 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:26.352 ************************************ 00:09:26.352 END TEST nvmf_target_multipath 00:09:26.352 ************************************ 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:26.612 ************************************ 00:09:26.612 START TEST nvmf_zcopy 00:09:26.612 ************************************ 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:26.612 * Looking for test storage... 
00:09:26.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:26.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.612 --rc genhtml_branch_coverage=1 00:09:26.612 --rc genhtml_function_coverage=1 00:09:26.612 --rc genhtml_legend=1 00:09:26.612 --rc geninfo_all_blocks=1 00:09:26.612 --rc geninfo_unexecuted_blocks=1 00:09:26.612 00:09:26.612 ' 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:26.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.612 --rc genhtml_branch_coverage=1 00:09:26.612 --rc genhtml_function_coverage=1 00:09:26.612 --rc genhtml_legend=1 00:09:26.612 --rc geninfo_all_blocks=1 00:09:26.612 --rc geninfo_unexecuted_blocks=1 00:09:26.612 00:09:26.612 ' 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:26.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.612 --rc genhtml_branch_coverage=1 00:09:26.612 --rc genhtml_function_coverage=1 00:09:26.612 --rc genhtml_legend=1 00:09:26.612 --rc geninfo_all_blocks=1 00:09:26.612 --rc geninfo_unexecuted_blocks=1 00:09:26.612 00:09:26.612 ' 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:26.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.612 --rc genhtml_branch_coverage=1 00:09:26.612 --rc genhtml_function_coverage=1 00:09:26.612 --rc genhtml_legend=1 00:09:26.612 --rc geninfo_all_blocks=1 00:09:26.612 --rc geninfo_unexecuted_blocks=1 00:09:26.612 00:09:26.612 ' 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:26.612 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.613 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:26.613 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:26.613 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:26.613 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.613 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.613 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.613 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:26.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:26.613 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:26.613 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:26.613 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:26.613 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:26.613 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:26.613 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:26.613 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:26.613 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:26.613 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:26.613 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.613 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.613 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.872 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:26.872 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:26.872 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:26.872 14:11:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:33.444 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:33.444 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:33.444 Found net devices under 0000:af:00.0: cvl_0_0 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:33.444 Found net devices under 0000:af:00.1: cvl_0_1 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:33.444 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.444 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.444 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.444 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:33.444 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:33.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:09:33.444 00:09:33.444 --- 10.0.0.2 ping statistics --- 00:09:33.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.444 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:09:33.444 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:33.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:09:33.444 00:09:33.444 --- 10.0.0.1 ping statistics --- 00:09:33.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.444 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:09:33.444 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.445 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:33.445 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:33.445 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.445 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:33.445 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:33.445 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.445 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:33.445 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:33.445 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:33.445 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:33.445 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:33.445 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.445 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1511983 00:09:33.445 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:33.445 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1511983 00:09:33.445 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1511983 ']' 00:09:33.445 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.445 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.445 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.445 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.445 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.445 [2024-12-10 14:11:34.170058] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
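nvmfappstart has just launched the target application inside that namespace and is polling until its RPC socket answers. Stripped of the harness wrappers, the launch reduces to roughly the following; the wait loop is a sketch of what waitforlisten does, not its literal body:

    # Start nvmf_tgt in the target namespace: shm instance 0 (-i 0), all
    # tracepoint groups on (-e 0xFFFF), reactor pinned to core 1 (-m 0x2).
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Block until the app serves RPCs on the default /var/tmp/spdk.sock.
    until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid"    # abort the wait if the target died
        sleep 0.1
    done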
00:09:33.445 [2024-12-10 14:11:34.170105] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.704 [2024-12-10 14:11:34.255364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.704 [2024-12-10 14:11:34.292452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.704 [2024-12-10 14:11:34.292482] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.704 [2024-12-10 14:11:34.292489] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.704 [2024-12-10 14:11:34.292494] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.704 [2024-12-10 14:11:34.292499] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.704 [2024-12-10 14:11:34.293015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.704 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.704 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:33.704 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:33.704 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:33.704 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.704 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.704 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:33.704 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:33.704 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.704 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.704 [2024-12-10 14:11:34.440435] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.964 [2024-12-10 14:11:34.456616] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.964 malloc0 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:33.964 { 00:09:33.964 "params": { 00:09:33.964 "name": "Nvme$subsystem", 00:09:33.964 "trtype": "$TEST_TRANSPORT", 00:09:33.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:33.964 "adrfam": "ipv4", 00:09:33.964 "trsvcid": "$NVMF_PORT", 00:09:33.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:33.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:33.964 "hdgst": ${hdgst:-false}, 00:09:33.964 "ddgst": ${ddgst:-false} 00:09:33.964 }, 00:09:33.964 "method": "bdev_nvme_attach_controller" 00:09:33.964 } 00:09:33.964 EOF 00:09:33.964 )") 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
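The rpc_cmd calls in the trace above are the entire target bring-up for this test. rpc_cmd forwards its arguments to SPDK's RPC client over /var/tmp/spdk.sock, so the sequence is equivalent to these direct invocations (flag meanings per the rpc.py help):

    # TCP transport: in-capsule data off (-c 0), C2H success optimization
    # off (-o), zero-copy enabled (--zcopy).
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    # Subsystem allowing any host (-a), fixed serial, at most 10 namespaces.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MiB malloc bdev with 4096-byte blocks, exported as namespace 1.
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1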
00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:33.964 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:33.964 "params": { 00:09:33.964 "name": "Nvme1", 00:09:33.964 "trtype": "tcp", 00:09:33.964 "traddr": "10.0.0.2", 00:09:33.964 "adrfam": "ipv4", 00:09:33.964 "trsvcid": "4420", 00:09:33.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:33.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:33.964 "hdgst": false, 00:09:33.964 "ddgst": false 00:09:33.964 }, 00:09:33.964 "method": "bdev_nvme_attach_controller" 00:09:33.964 }' 00:09:33.964 [2024-12-10 14:11:34.540214] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:09:33.964 [2024-12-10 14:11:34.540264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1512005 ] 00:09:33.964 [2024-12-10 14:11:34.620740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.964 [2024-12-10 14:11:34.660552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.223 Running I/O for 10 seconds... 00:09:36.542 8759.00 IOPS, 68.43 MiB/s [2024-12-10T13:11:38.218Z] 8812.50 IOPS, 68.85 MiB/s [2024-12-10T13:11:39.155Z] 8851.33 IOPS, 69.15 MiB/s [2024-12-10T13:11:40.091Z] 8870.75 IOPS, 69.30 MiB/s [2024-12-10T13:11:41.028Z] 8867.60 IOPS, 69.28 MiB/s [2024-12-10T13:11:41.964Z] 8847.17 IOPS, 69.12 MiB/s [2024-12-10T13:11:42.919Z] 8850.86 IOPS, 69.15 MiB/s [2024-12-10T13:11:44.297Z] 8859.75 IOPS, 69.22 MiB/s [2024-12-10T13:11:45.234Z] 8865.78 IOPS, 69.26 MiB/s [2024-12-10T13:11:45.234Z] 8867.20 IOPS, 69.28 MiB/s 00:09:44.494 Latency(us) 00:09:44.494 [2024-12-10T13:11:45.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.494 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:44.494 Verification LBA range: start 0x0 length 0x1000 00:09:44.494 Nvme1n1 : 10.01 8870.33 69.30 0.00 0.00 14389.14 1817.84 24466.77 00:09:44.494 [2024-12-10T13:11:45.234Z] =================================================================================================================== 00:09:44.494 [2024-12-10T13:11:45.234Z] Total : 8870.33 69.30 0.00 0.00 14389.14 1817.84 24466.77 00:09:44.494 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1513815 00:09:44.494 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:44.494 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:44.494 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:44.494 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:44.494 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:44.494 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:44.494 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:44.494 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:44.494 { 00:09:44.494 "params": { 00:09:44.494 "name": 
"Nvme$subsystem", 00:09:44.494 "trtype": "$TEST_TRANSPORT", 00:09:44.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.494 "adrfam": "ipv4", 00:09:44.494 "trsvcid": "$NVMF_PORT", 00:09:44.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.494 "hdgst": ${hdgst:-false}, 00:09:44.494 "ddgst": ${ddgst:-false} 00:09:44.494 }, 00:09:44.494 "method": "bdev_nvme_attach_controller" 00:09:44.494 } 00:09:44.494 EOF 00:09:44.494 )") 00:09:44.494 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:44.494 [2024-12-10 14:11:45.053477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.494 [2024-12-10 14:11:45.053510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.494 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:44.494 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:44.494 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:44.494 "params": { 00:09:44.494 "name": "Nvme1", 00:09:44.494 "trtype": "tcp", 00:09:44.494 "traddr": "10.0.0.2", 00:09:44.494 "adrfam": "ipv4", 00:09:44.494 "trsvcid": "4420", 00:09:44.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.494 "hdgst": false, 00:09:44.494 "ddgst": false 00:09:44.494 }, 00:09:44.494 "method": "bdev_nvme_attach_controller" 00:09:44.494 }' 00:09:44.494 [2024-12-10 14:11:45.065479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.494 [2024-12-10 14:11:45.065491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.494 [2024-12-10 14:11:45.077503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.494 [2024-12-10 14:11:45.077513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.494 [2024-12-10 14:11:45.089535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.494 [2024-12-10 14:11:45.089545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.494 [2024-12-10 14:11:45.093308] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:09:44.494 [2024-12-10 14:11:45.093349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1513815 ]
00:09:44.494 [2024-12-10 14:11:45.101570 .. 14:11:45.161736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeated 6x)
00:09:44.494 [2024-12-10 14:11:45.171615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:44.494 [2024-12-10 14:11:45.173757 .. 14:11:45.209882] (same NSID-in-use / unable-to-add pair, repeated 4x)
00:09:44.494 [2024-12-10 14:11:45.211558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:44.753 [2024-12-10 14:11:45.221895 .. 14:11:45.366321] (same pair, repeated 13x)
00:09:44.753 Running I/O for 5 seconds...
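"Running I/O for 5 seconds..." is bdevperf's banner, so at this point the initiator app is up (one core available, reactor on core 0) and the workload is running against the controller described by the generated JSON. A hedged sketch of the invocation; only the 5-second runtime is confirmed by the log, and the binary path, queue depth, I/O size, and workload type below are assumptions:

# Sketch: launch bdevperf against the generated config.
# -t 5 matches "Running I/O for 5 seconds..."; -q/-o/-w are illustrative.
BDEVPERF=./build/examples/bdevperf
"$BDEVPERF" --json <(gen_attach_json) -q 128 -o 8192 -w verify -t 5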
00:09:44.753 [2024-12-10 14:11:45.378315 .. 14:11:46.368954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (same pair, repeated continuously, roughly every 9-15 ms)
00:09:45.791 16994.00 IOPS, 132.77 MiB/s [2024-12-10T13:11:46.531Z]
00:09:45.791 [2024-12-10 14:11:46.382678 .. 14:11:46.462152] (same pair, repeated continuously)
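The error pair that dominates this window is the target rejecting a namespace add because NSID 1 is already claimed; the test appears to drive this path deliberately, over and over, while bdevperf keeps I/O in flight. It can be reproduced by hand with scripts/rpc.py (the bdev name, malloc sizes, and NQNs below are assumptions for illustration):

# Claim NSID 1 twice; the second add fails with exactly the pair seen above:
#   subsystem.c: Requested NSID 1 already in use
#   nvmf_rpc.c:  Unable to add namespace
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1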
00:09:45.791 [2024-12-10 14:11:46.471727 .. 14:11:47.374025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (same pair, repeated continuously)
00:09:46.829 17093.50 IOPS, 133.54 MiB/s [2024-12-10T13:11:47.569Z]
00:09:46.829 [2024-12-10 14:11:47.387656 .. 14:11:47.543577] (same pair, repeated continuously)
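The interim counters (16994.00 then 17093.50 IOPS) line up with an 8 KiB I/O size: IOPS times 8192 bytes converts exactly to the reported MiB/s. The log never states the I/O size, so this is an inference, but it checks out numerically:

# 16994 IOPS * 8192 B / 2^20 -> 132.77 MiB/s (matches the first report)
awk 'BEGIN { printf "%.2f MiB/s\n", 16994   * 8192 / 1048576 }'
awk 'BEGIN { printf "%.2f MiB/s\n", 17093.5 * 8192 / 1048576 }'   # -> 133.54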
00:09:46.829 [2024-12-10 14:11:47.557146 .. 14:11:48.368291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (same pair, repeated continuously)
00:09:47.867 17128.67 IOPS, 133.82 MiB/s [2024-12-10T13:11:48.607Z]
00:09:47.867 [2024-12-10 14:11:48.382437 .. 14:11:48.672166] (same pair, repeated continuously)
00:09:48.126 [2024-12-10 14:11:48.680917]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.126 [2024-12-10 14:11:48.680935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.126 [2024-12-10 14:11:48.695129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.126 [2024-12-10 14:11:48.695147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.126 [2024-12-10 14:11:48.708279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.126 [2024-12-10 14:11:48.708297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.126 [2024-12-10 14:11:48.721522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.126 [2024-12-10 14:11:48.721540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.126 [2024-12-10 14:11:48.735130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.126 [2024-12-10 14:11:48.735148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.126 [2024-12-10 14:11:48.749038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.126 [2024-12-10 14:11:48.749057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.126 [2024-12-10 14:11:48.762690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.126 [2024-12-10 14:11:48.762707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.126 [2024-12-10 14:11:48.776375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.126 [2024-12-10 14:11:48.776392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.126 [2024-12-10 14:11:48.789663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.126 [2024-12-10 14:11:48.789680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.126 [2024-12-10 14:11:48.803102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.126 [2024-12-10 14:11:48.803120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.126 [2024-12-10 14:11:48.817163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.126 [2024-12-10 14:11:48.817181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.126 [2024-12-10 14:11:48.830826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.126 [2024-12-10 14:11:48.830844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.126 [2024-12-10 14:11:48.844629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.126 [2024-12-10 14:11:48.844647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.126 [2024-12-10 14:11:48.858663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.126 [2024-12-10 14:11:48.858681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.386 [2024-12-10 14:11:48.872642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.386 [2024-12-10 14:11:48.872660] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.386 [2024-12-10 14:11:48.885777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.386 [2024-12-10 14:11:48.885797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.386 [2024-12-10 14:11:48.899828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.386 [2024-12-10 14:11:48.899846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.386 [2024-12-10 14:11:48.913502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.386 [2024-12-10 14:11:48.913525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.386 [2024-12-10 14:11:48.926975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.386 [2024-12-10 14:11:48.926993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.386 [2024-12-10 14:11:48.940743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.386 [2024-12-10 14:11:48.940761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.386 [2024-12-10 14:11:48.954549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.386 [2024-12-10 14:11:48.954567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.386 [2024-12-10 14:11:48.968214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.386 [2024-12-10 14:11:48.968238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.386 [2024-12-10 14:11:48.981826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.386 [2024-12-10 14:11:48.981845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.386 [2024-12-10 14:11:48.995674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.386 [2024-12-10 14:11:48.995692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.386 [2024-12-10 14:11:49.009490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.386 [2024-12-10 14:11:49.009508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.386 [2024-12-10 14:11:49.023174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.386 [2024-12-10 14:11:49.023191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.386 [2024-12-10 14:11:49.036690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.386 [2024-12-10 14:11:49.036708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.386 [2024-12-10 14:11:49.050700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.386 [2024-12-10 14:11:49.050717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.386 [2024-12-10 14:11:49.060040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.386 [2024-12-10 14:11:49.060058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.386 [2024-12-10 14:11:49.074644] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.386 [2024-12-10 14:11:49.074663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.386 [2024-12-10 14:11:49.083487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.386 [2024-12-10 14:11:49.083505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.386 [2024-12-10 14:11:49.097896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.386 [2024-12-10 14:11:49.097914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.386 [2024-12-10 14:11:49.111316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.386 [2024-12-10 14:11:49.111334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.386 [2024-12-10 14:11:49.125445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.386 [2024-12-10 14:11:49.125463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.645 [2024-12-10 14:11:49.136229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.645 [2024-12-10 14:11:49.136263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.645 [2024-12-10 14:11:49.145658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.645 [2024-12-10 14:11:49.145676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.645 [2024-12-10 14:11:49.160081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.645 [2024-12-10 14:11:49.160100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.645 [2024-12-10 14:11:49.173971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.645 [2024-12-10 14:11:49.173990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.645 [2024-12-10 14:11:49.187309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.645 [2024-12-10 14:11:49.187327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.645 [2024-12-10 14:11:49.201056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.645 [2024-12-10 14:11:49.201073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.645 [2024-12-10 14:11:49.215147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.645 [2024-12-10 14:11:49.215166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.645 [2024-12-10 14:11:49.226387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.645 [2024-12-10 14:11:49.226406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.645 [2024-12-10 14:11:49.241097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.645 [2024-12-10 14:11:49.241115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.645 [2024-12-10 14:11:49.255098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.646 [2024-12-10 14:11:49.255116] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.646 [2024-12-10 14:11:49.263906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.646 [2024-12-10 14:11:49.263923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.646 [2024-12-10 14:11:49.278544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.646 [2024-12-10 14:11:49.278562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.646 [2024-12-10 14:11:49.287602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.646 [2024-12-10 14:11:49.287620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.646 [2024-12-10 14:11:49.301708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.646 [2024-12-10 14:11:49.301727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.646 [2024-12-10 14:11:49.315189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.646 [2024-12-10 14:11:49.315208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.646 [2024-12-10 14:11:49.324000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.646 [2024-12-10 14:11:49.324017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.646 [2024-12-10 14:11:49.333214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.646 [2024-12-10 14:11:49.333236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.646 [2024-12-10 14:11:49.347223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.646 [2024-12-10 14:11:49.347241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.646 [2024-12-10 14:11:49.361034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.646 [2024-12-10 14:11:49.361052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.646 [2024-12-10 14:11:49.375023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.646 [2024-12-10 14:11:49.375041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.905 17124.50 IOPS, 133.79 MiB/s [2024-12-10T13:11:49.645Z] [2024-12-10 14:11:49.388534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.905 [2024-12-10 14:11:49.388552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.905 [2024-12-10 14:11:49.401785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.905 [2024-12-10 14:11:49.401803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.905 [2024-12-10 14:11:49.410600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.905 [2024-12-10 14:11:49.410617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.905 [2024-12-10 14:11:49.425188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.905 [2024-12-10 14:11:49.425207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.905 [2024-12-10 
14:11:49.439114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.905 [2024-12-10 14:11:49.439132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.905 [2024-12-10 14:11:49.452485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.905 [2024-12-10 14:11:49.452502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.905 [2024-12-10 14:11:49.461366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.905 [2024-12-10 14:11:49.461383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.905 [2024-12-10 14:11:49.470609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.905 [2024-12-10 14:11:49.470626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.906 [2024-12-10 14:11:49.485347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.906 [2024-12-10 14:11:49.485365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.906 [2024-12-10 14:11:49.495870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.906 [2024-12-10 14:11:49.495888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.906 [2024-12-10 14:11:49.509810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.906 [2024-12-10 14:11:49.509827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.906 [2024-12-10 14:11:49.523034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.906 [2024-12-10 14:11:49.523052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.906 [2024-12-10 14:11:49.536443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.906 [2024-12-10 14:11:49.536461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.906 [2024-12-10 14:11:49.550561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.906 [2024-12-10 14:11:49.550579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.906 [2024-12-10 14:11:49.564706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.906 [2024-12-10 14:11:49.564723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.906 [2024-12-10 14:11:49.575574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.906 [2024-12-10 14:11:49.575592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.906 [2024-12-10 14:11:49.589562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.906 [2024-12-10 14:11:49.589580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.906 [2024-12-10 14:11:49.603232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.906 [2024-12-10 14:11:49.603250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.906 [2024-12-10 14:11:49.616832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.906 [2024-12-10 14:11:49.616850] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.906 [2024-12-10 14:11:49.630768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.906 [2024-12-10 14:11:49.630791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.906 [2024-12-10 14:11:49.644395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.906 [2024-12-10 14:11:49.644414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.165 [2024-12-10 14:11:49.658027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.165 [2024-12-10 14:11:49.658046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.165 [2024-12-10 14:11:49.666929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.165 [2024-12-10 14:11:49.666947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.165 [2024-12-10 14:11:49.680866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.165 [2024-12-10 14:11:49.680885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.165 [2024-12-10 14:11:49.689706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.165 [2024-12-10 14:11:49.689724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.165 [2024-12-10 14:11:49.704061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.165 [2024-12-10 14:11:49.704079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.165 [2024-12-10 14:11:49.717825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.165 [2024-12-10 14:11:49.717844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.165 [2024-12-10 14:11:49.731305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.165 [2024-12-10 14:11:49.731324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.165 [2024-12-10 14:11:49.744570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.165 [2024-12-10 14:11:49.744589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.165 [2024-12-10 14:11:49.753424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.165 [2024-12-10 14:11:49.753442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.165 [2024-12-10 14:11:49.768130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.165 [2024-12-10 14:11:49.768150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.165 [2024-12-10 14:11:49.778665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.165 [2024-12-10 14:11:49.778683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.165 [2024-12-10 14:11:49.792903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.165 [2024-12-10 14:11:49.792921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.165 [2024-12-10 14:11:49.806781] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.165 [2024-12-10 14:11:49.806800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.165 [2024-12-10 14:11:49.820093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.165 [2024-12-10 14:11:49.820111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.165 [2024-12-10 14:11:49.833562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.165 [2024-12-10 14:11:49.833580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.165 [2024-12-10 14:11:49.847023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.165 [2024-12-10 14:11:49.847042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.165 [2024-12-10 14:11:49.860482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.165 [2024-12-10 14:11:49.860501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.165 [2024-12-10 14:11:49.874535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.165 [2024-12-10 14:11:49.874558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.165 [2024-12-10 14:11:49.888527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.165 [2024-12-10 14:11:49.888544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.165 [2024-12-10 14:11:49.902618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.165 [2024-12-10 14:11:49.902637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-10 14:11:49.913632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-10 14:11:49.913651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-10 14:11:49.927822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-10 14:11:49.927840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-10 14:11:49.940796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-10 14:11:49.940814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-10 14:11:49.950302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-10 14:11:49.950321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-10 14:11:49.964138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-10 14:11:49.964155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-10 14:11:49.972962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-10 14:11:49.972979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-10 14:11:49.986779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-10 14:11:49.986797] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-10 14:11:50.000308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-10 14:11:50.000326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-10 14:11:50.013815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-10 14:11:50.013833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-10 14:11:50.028557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-10 14:11:50.028575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-10 14:11:50.043443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-10 14:11:50.043463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-10 14:11:50.053537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-10 14:11:50.053557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-10 14:11:50.067824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-10 14:11:50.067843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.425 [2024-12-10 14:11:50.081482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-10 14:11:50.081500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.425 [2024-12-10 14:11:50.095863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-10 14:11:50.095882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.425 [2024-12-10 14:11:50.106446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-10 14:11:50.106464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.425 [2024-12-10 14:11:50.115821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-10 14:11:50.115843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.425 [2024-12-10 14:11:50.129937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-10 14:11:50.129955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.425 [2024-12-10 14:11:50.143747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-10 14:11:50.143765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.425 [2024-12-10 14:11:50.157342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-10 14:11:50.157361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.684 [2024-12-10 14:11:50.171246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.684 [2024-12-10 14:11:50.171264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.684 [2024-12-10 14:11:50.184986] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.684 [2024-12-10 14:11:50.185004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.684 [2024-12-10 14:11:50.198638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.684 [2024-12-10 14:11:50.198656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.684 [2024-12-10 14:11:50.212360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.684 [2024-12-10 14:11:50.212378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.684 [2024-12-10 14:11:50.226224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.684 [2024-12-10 14:11:50.226242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.684 [2024-12-10 14:11:50.239851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.684 [2024-12-10 14:11:50.239869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.684 [2024-12-10 14:11:50.253557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.684 [2024-12-10 14:11:50.253575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.684 [2024-12-10 14:11:50.267168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.684 [2024-12-10 14:11:50.267186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.684 [2024-12-10 14:11:50.281283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.684 [2024-12-10 14:11:50.281302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.684 [2024-12-10 14:11:50.295508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.684 [2024-12-10 14:11:50.295526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.684 [2024-12-10 14:11:50.309478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.684 [2024-12-10 14:11:50.309496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.684 [2024-12-10 14:11:50.322956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.684 [2024-12-10 14:11:50.322975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.684 [2024-12-10 14:11:50.336775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.684 [2024-12-10 14:11:50.336793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.684 [2024-12-10 14:11:50.350729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.684 [2024-12-10 14:11:50.350747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.684 [2024-12-10 14:11:50.364834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.684 [2024-12-10 14:11:50.364852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.684 [2024-12-10 14:11:50.373858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.684 [2024-12-10 14:11:50.373879] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.684 [2024-12-10 14:11:50.383233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.684 [2024-12-10 14:11:50.383251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.684 17114.20 IOPS, 133.70 MiB/s 00:09:49.684 Latency(us) 00:09:49.684 [2024-12-10T13:11:50.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.684 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:49.684 Nvme1n1 : 5.01 17120.76 133.76 0.00 0.00 7470.35 3448.44 18474.91 00:09:49.684 [2024-12-10T13:11:50.424Z] =================================================================================================================== 00:09:49.684 [2024-12-10T13:11:50.424Z] Total : 17120.76 133.76 0.00 0.00 7470.35 3448.44 18474.91 00:09:49.684 [2024-12-10 14:11:50.393607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.684 [2024-12-10 14:11:50.393624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.684 [2024-12-10 14:11:50.405639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.684 [2024-12-10 14:11:50.405654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.684 [2024-12-10 14:11:50.417687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.684 [2024-12-10 14:11:50.417706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.943 [2024-12-10 14:11:50.429710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.943 [2024-12-10 14:11:50.429727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.943 [2024-12-10 14:11:50.441739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.943 [2024-12-10 14:11:50.441755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.943 [2024-12-10 14:11:50.453763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.943 [2024-12-10 14:11:50.453776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.943 [2024-12-10 14:11:50.465796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.943 [2024-12-10 14:11:50.465812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.943 [2024-12-10 14:11:50.477833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.943 [2024-12-10 14:11:50.477850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.943 [2024-12-10 14:11:50.489862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.943 [2024-12-10 14:11:50.489877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.943 [2024-12-10 14:11:50.501894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.943 [2024-12-10 14:11:50.501910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.943 [2024-12-10 14:11:50.513916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.943 [2024-12-10 14:11:50.513925] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.943 [2024-12-10 14:11:50.525957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.944 [2024-12-10 14:11:50.525970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.944 [2024-12-10 14:11:50.537997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.944 [2024-12-10 14:11:50.538006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.944 [2024-12-10 14:11:50.550017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.944 [2024-12-10 14:11:50.550026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1513815) - No such process 00:09:49.944 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1513815 00:09:49.944 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.944 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.944 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.944 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.944 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:49.944 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.944 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.944 delay0 00:09:49.944 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.944 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:49.944 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.944 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.944 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.944 14:11:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:50.202 [2024-12-10 14:11:50.702886] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:56.766 [2024-12-10 14:11:56.953363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1375c60 is same with the state(6) to be set 00:09:56.766 Initializing NVMe Controllers 00:09:56.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:56.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:56.766 Initialization complete. Launching workers. 
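The xtrace above is zcopy.sh moving into its abort phase: step 52 removes NSID 1, step 53 wraps malloc0 in a delay bdev that adds roughly 1 s (1,000,000 us) of latency per I/O, step 54 re-exports it as NSID 1, and step 56 points SPDK's abort example at the deliberately slow namespace so queued commands can be aborted in flight. A minimal standalone sketch of the same sequence, assuming a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 and run from the SPDK repo root (rpc_cmd in the log is the test suite's wrapper around scripts/rpc.py):

  # swap the namespace behind NSID 1 for a delay bdev (latency arguments in microseconds)
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # queue 50/50 random read/write I/O at depth 64 for 5 s and abort it mid-flight
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'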
00:09:56.766 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1021
00:09:56.766 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1297, failed to submit 44
00:09:56.766 success 1119, unsuccessful 178, failed 0
00:09:56.766 14:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:09:56.766 14:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:09:56.766 14:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:56.766 14:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:09:56.766 14:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:56.766 14:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:09:56.766 14:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:56.766 14:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:56.766 rmmod nvme_tcp
00:09:56.766 rmmod nvme_fabrics
00:09:56.766 rmmod nvme_keyring
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1511983 ']'
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1511983
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1511983 ']'
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1511983
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1511983
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1511983'
killing process with pid 1511983
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1511983
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1511983
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:58.672 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:58.672
00:09:58.672 real 0m32.177s
00:09:58.672 user 0m41.960s
00:09:58.672 sys 0m11.849s
00:09:58.672 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:58.672 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:58.672 ************************************
00:09:58.672 END TEST nvmf_zcopy
00:09:58.672 ************************************
00:09:58.672 14:11:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:58.672 14:11:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:58.672 14:11:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:58.672 14:11:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:58.672 ************************************
00:09:58.672 START TEST nvmf_nmic
00:09:58.672 ************************************
00:09:58.672 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:58.931 * Looking for test storage...
00:09:58.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:58.931 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:58.931 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:58.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.932 --rc genhtml_branch_coverage=1 00:09:58.932 --rc genhtml_function_coverage=1 00:09:58.932 --rc genhtml_legend=1 00:09:58.932 --rc geninfo_all_blocks=1 00:09:58.932 --rc geninfo_unexecuted_blocks=1 00:09:58.932 00:09:58.932 ' 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:58.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.932 --rc genhtml_branch_coverage=1 00:09:58.932 --rc genhtml_function_coverage=1 00:09:58.932 --rc genhtml_legend=1 00:09:58.932 --rc geninfo_all_blocks=1 00:09:58.932 --rc geninfo_unexecuted_blocks=1 00:09:58.932 00:09:58.932 ' 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:58.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.932 --rc genhtml_branch_coverage=1 00:09:58.932 --rc genhtml_function_coverage=1 00:09:58.932 --rc genhtml_legend=1 00:09:58.932 --rc geninfo_all_blocks=1 00:09:58.932 --rc geninfo_unexecuted_blocks=1 00:09:58.932 00:09:58.932 ' 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:58.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.932 --rc genhtml_branch_coverage=1 00:09:58.932 --rc genhtml_function_coverage=1 00:09:58.932 --rc genhtml_legend=1 00:09:58.932 --rc geninfo_all_blocks=1 00:09:58.932 --rc geninfo_unexecuted_blocks=1 00:09:58.932 00:09:58.932 ' 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
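The trace just above is scripts/common.sh deciding whether the installed lcov (1.15) predates version 2: lt calls cmp_versions, each version string is read into an array split on '.', '-' and ':', the arrays are padded to the longer length, and components are compared numerically left to right via decimal(). A condensed sketch of the same comparison, simplified to dot-separated versions (the SPDK helper handles the extra separators and operators, as the trace shows):

  # return success (0) when version $1 sorts strictly before version $2
  version_lt() {
      local IFS=.
      local -a v1=($1) v2=($2)
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for ((i = 0; i < n; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields compare as 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1  # versions are equal
  }
  version_lt 1.15 2 && echo "lcov 1.15 < 2: use the pre-2.x lcov option set"  # matches the trace above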
00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:58.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:58.932 
14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.932 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.933 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:58.933 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:58.933 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:58.933 14:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.502 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:05.502 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:05.502 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:05.502 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:05.502 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:05.502 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:05.503 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:05.503 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:05.503 14:12:06 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:05.503 Found net devices under 0000:af:00.0: cvl_0_0 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:05.503 Found net devices under 0000:af:00.1: cvl_0_1 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:05.503 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:05.762 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:05.762 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:05.762 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:05.762 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:05.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:05.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:10:05.762 00:10:05.762 --- 10.0.0.2 ping statistics --- 00:10:05.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.763 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:05.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:05.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:10:05.763 00:10:05.763 --- 10.0.0.1 ping statistics --- 00:10:05.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.763 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1520019 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1520019 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1520019 ']' 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:05.763 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.763 [2024-12-10 14:12:06.437910] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
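Condensed, the target-side plumbing traced above comes down to moving one port of the NIC into a private network namespace, addressing both ends, opening TCP/4420, and launching nvmf_tgt inside the namespace. A sketch using this run's interface names (cvl_0_0/cvl_0_1 are specific to this E810 host, the iptables rule is simplified without the SPDK_NVMF comment tag, and the binary path is relative to the spdk checkout):

#!/usr/bin/env bash
set -e
TGT_IF=cvl_0_0            # port handed to the target namespace
INI_IF=cvl_0_1            # port kept in the root namespace for the initiator
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                          # move target port into the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # initiator -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target namespace -> initiator
modprobe nvme-tcp                                          # host-side transport driver
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The two ports are cabled back-to-back on this rig, so root-namespace traffic to 10.0.0.2 leaves cvl_0_1 and re-enters through cvl_0_0 inside the namespace; that physical loop is what makes the two sanity pings above (and the ping statistics logged here) meaningful.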
00:10:05.763 [2024-12-10 14:12:06.437954] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.022 [2024-12-10 14:12:06.524000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:06.022 [2024-12-10 14:12:06.565757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.022 [2024-12-10 14:12:06.565793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.022 [2024-12-10 14:12:06.565801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.022 [2024-12-10 14:12:06.565807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.022 [2024-12-10 14:12:06.565812] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.022 [2024-12-10 14:12:06.567379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.022 [2024-12-10 14:12:06.567414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.022 [2024-12-10 14:12:06.567434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.022 [2024-12-10 14:12:06.567435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.022 [2024-12-10 14:12:06.700681] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.022 Malloc0 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.022 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.022 [2024-12-10 14:12:06.760757] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:06.281 test case1: single bdev can't be used in multiple subsystems 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.281 [2024-12-10 14:12:06.784662] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:06.281 [2024-12-10 14:12:06.784681] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:06.281 [2024-12-10 14:12:06.784688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.281 request: 00:10:06.281 { 00:10:06.281 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:06.281 "namespace": { 00:10:06.281 "bdev_name": "Malloc0", 00:10:06.281 "no_auto_visible": false, 
00:10:06.281 "hide_metadata": false 00:10:06.281 }, 00:10:06.281 "method": "nvmf_subsystem_add_ns", 00:10:06.281 "req_id": 1 00:10:06.281 } 00:10:06.281 Got JSON-RPC error response 00:10:06.281 response: 00:10:06.281 { 00:10:06.281 "code": -32602, 00:10:06.281 "message": "Invalid parameters" 00:10:06.281 } 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:06.281 Adding namespace failed - expected result. 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:06.281 test case2: host connect to nvmf target in multiple paths 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.281 [2024-12-10 14:12:06.796786] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.281 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:07.215 14:12:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:08.592 14:12:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:08.592 14:12:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:08.592 14:12:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:08.592 14:12:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:08.592 14:12:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:10.496 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:10.496 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:10.496 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:10.496 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:10.496 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:10.496 14:12:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:10.496 14:12:11 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:10.496 [global] 00:10:10.496 thread=1 00:10:10.496 invalidate=1 00:10:10.496 rw=write 00:10:10.496 time_based=1 00:10:10.496 runtime=1 00:10:10.496 ioengine=libaio 00:10:10.496 direct=1 00:10:10.496 bs=4096 00:10:10.496 iodepth=1 00:10:10.496 norandommap=0 00:10:10.496 numjobs=1 00:10:10.496 00:10:10.496 verify_dump=1 00:10:10.496 verify_backlog=512 00:10:10.496 verify_state_save=0 00:10:10.496 do_verify=1 00:10:10.496 verify=crc32c-intel 00:10:10.496 [job0] 00:10:10.496 filename=/dev/nvme0n1 00:10:10.496 Could not set queue depth (nvme0n1) 00:10:10.754 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.754 fio-3.35 00:10:10.754 Starting 1 thread 00:10:12.246 00:10:12.246 job0: (groupid=0, jobs=1): err= 0: pid=1521211: Tue Dec 10 14:12:12 2024 00:10:12.246 read: IOPS=22, BW=90.3KiB/s (92.5kB/s)(92.0KiB/1019msec) 00:10:12.246 slat (nsec): min=9618, max=24250, avg=22703.91, stdev=2888.96 00:10:12.246 clat (usec): min=40738, max=41966, avg=41095.35, stdev=346.58 00:10:12.246 lat (usec): min=40748, max=41988, avg=41118.05, stdev=347.16 00:10:12.246 clat percentiles (usec): 00:10:12.246 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:12.246 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:12.246 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:10:12.246 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:12.246 | 99.99th=[42206] 00:10:12.246 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:10:12.246 slat (nsec): min=9376, max=44343, avg=10667.17, stdev=2067.92 00:10:12.246 clat (usec): min=116, max=364, avg=130.50, stdev=15.95 00:10:12.246 lat (usec): min=126, max=409, avg=141.17, stdev=17.07 00:10:12.246 clat percentiles (usec): 00:10:12.246 | 1.00th=[ 119], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 124], 00:10:12.246 | 30.00th=[ 125], 40.00th=[ 126], 50.00th=[ 127], 60.00th=[ 128], 00:10:12.246 | 70.00th=[ 130], 80.00th=[ 133], 90.00th=[ 145], 95.00th=[ 161], 00:10:12.246 | 99.00th=[ 176], 99.50th=[ 178], 99.90th=[ 367], 99.95th=[ 367], 00:10:12.246 | 99.99th=[ 367] 00:10:12.246 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:12.246 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:12.246 lat (usec) : 250=95.51%, 500=0.19% 00:10:12.246 lat (msec) : 50=4.30% 00:10:12.246 cpu : usr=0.29%, sys=0.49%, ctx=535, majf=0, minf=1 00:10:12.246 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.246 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.246 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.246 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.246 00:10:12.246 Run status group 0 (all jobs): 00:10:12.246 READ: bw=90.3KiB/s (92.5kB/s), 90.3KiB/s-90.3KiB/s (92.5kB/s-92.5kB/s), io=92.0KiB (94.2kB), run=1019-1019msec 00:10:12.246 WRITE: bw=2010KiB/s (2058kB/s), 2010KiB/s-2010KiB/s (2058kB/s-2058kB/s), io=2048KiB (2097kB), run=1019-1019msec 00:10:12.246 00:10:12.246 Disk stats (read/write): 00:10:12.246 nvme0n1: ios=70/512, merge=0/0, ticks=836/63, in_queue=899, util=91.38% 00:10:12.246 14:12:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:12.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:12.246 rmmod nvme_tcp 00:10:12.246 rmmod nvme_fabrics 00:10:12.246 rmmod nvme_keyring 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1520019 ']' 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1520019 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1520019 ']' 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1520019 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1520019 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1520019' 00:10:12.246 killing process with pid 1520019 00:10:12.246 14:12:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1520019 00:10:12.246 14:12:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1520019 00:10:12.506 14:12:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:12.506 14:12:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:12.506 14:12:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:12.506 14:12:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:12.506 14:12:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:12.506 14:12:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:12.506 14:12:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:12.506 14:12:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:12.506 14:12:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:12.506 14:12:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.506 14:12:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.506 14:12:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.411 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:14.411 00:10:14.411 real 0m15.749s 00:10:14.411 user 0m33.258s 00:10:14.411 sys 0m5.851s 00:10:14.411 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.411 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.411 ************************************ 00:10:14.411 END TEST nvmf_nmic 00:10:14.411 ************************************ 00:10:14.669 14:12:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:14.669 14:12:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:14.669 14:12:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.669 14:12:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:14.669 ************************************ 00:10:14.669 START TEST nvmf_fio_target 00:10:14.669 ************************************ 00:10:14.669 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:14.669 * Looking for test storage... 
00:10:14.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:14.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.670 --rc genhtml_branch_coverage=1 00:10:14.670 --rc genhtml_function_coverage=1 00:10:14.670 --rc genhtml_legend=1 00:10:14.670 --rc geninfo_all_blocks=1 00:10:14.670 --rc geninfo_unexecuted_blocks=1 00:10:14.670 00:10:14.670 ' 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:14.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.670 --rc genhtml_branch_coverage=1 00:10:14.670 --rc genhtml_function_coverage=1 00:10:14.670 --rc genhtml_legend=1 00:10:14.670 --rc geninfo_all_blocks=1 00:10:14.670 --rc geninfo_unexecuted_blocks=1 00:10:14.670 00:10:14.670 ' 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:14.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.670 --rc genhtml_branch_coverage=1 00:10:14.670 --rc genhtml_function_coverage=1 00:10:14.670 --rc genhtml_legend=1 00:10:14.670 --rc geninfo_all_blocks=1 00:10:14.670 --rc geninfo_unexecuted_blocks=1 00:10:14.670 00:10:14.670 ' 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:14.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.670 --rc genhtml_branch_coverage=1 00:10:14.670 --rc genhtml_function_coverage=1 00:10:14.670 --rc genhtml_legend=1 00:10:14.670 --rc geninfo_all_blocks=1 00:10:14.670 --rc geninfo_unexecuted_blocks=1 00:10:14.670 00:10:14.670 ' 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.670 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:14.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:14.929 14:12:15 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:14.929 14:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.499 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.499 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:21.499 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:21.499 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:21.499 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:21.499 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:21.499 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:21.499 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:21.499 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:21.499 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:21.499 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:21.499 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:21.499 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:21.499 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:21.499 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:21.499 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.499 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.499 14:12:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.499 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.499 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.500 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.500 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.500 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:21.500 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.500 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.500 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.500 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.500 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:21.500 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:21.500 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:21.500 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:21.500 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:21.500 14:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:21.500 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:21.500 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.500 14:12:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:21.500 Found net devices under 0000:af:00.0: cvl_0_0 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:21.500 Found net devices under 0000:af:00.1: cvl_0_1 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:21.500 14:12:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:21.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:21.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms
00:10:21.500
00:10:21.500 --- 10.0.0.2 ping statistics ---
00:10:21.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:21.500 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms
00:10:21.500 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:21.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
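Condensed from the nvmftestinit trace above, the whole network bring-up is about a dozen commands; this sketch assumes the cvl_0_0/cvl_0_1 device names discovered earlier and omits the shell bookkeeping:

    # Target port lives in its own namespace; the initiator stays in the root ns.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port, then prove both directions route before testing.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Moving one E810 port into its own namespace forces the 10.0.0.1/10.0.0.2 traffic through the NIC pair rather than the kernel loopback, which is the point of the phy flavor of this job.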
00:10:21.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms
00:10:21.759
00:10:21.759 --- 10.0.0.1 ping statistics ---
00:10:21.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:21.759 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms
00:10:21.759 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:21.759 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0
00:10:21.760 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:21.760 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:21.760 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:21.760 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:21.760 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:21.760 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:21.760 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:21.760 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:10:21.760 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:21.760 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:21.760 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:10:21.760 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1525462
00:10:21.760 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1525462
00:10:21.760 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:21.760 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1525462 ']'
00:10:21.760 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:21.760 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:21.760 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:21.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:21.760 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:21.760 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:10:21.760 [2024-12-10 14:12:22.343498] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization...
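nvmfappstart, expanded in the trace above, amounts to launching the target binary inside that namespace and waiting for its RPC socket; the until loop below is only a stand-in for the waitforlisten helper, which as traced retries up to max_retries=100 against /var/tmp/spdk.sock:

    modprobe nvme-tcp                      # initiator-side NVMe/TCP driver
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &            # shm id 0, trace mask 0xFFFF, cores 0-3
    nvmfpid=$!
    # Stand-in for waitforlisten: any RPC succeeds once the socket is live.
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

Everything from here on drives that single nvmf_tgt process through /var/tmp/spdk.sock.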
00:10:21.760 [2024-12-10 14:12:22.343541] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:21.760 [2024-12-10 14:12:22.426552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:21.760 [2024-12-10 14:12:22.467958] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:21.760 [2024-12-10 14:12:22.467995] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:21.760 [2024-12-10 14:12:22.468002] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:21.760 [2024-12-10 14:12:22.468008] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:21.760 [2024-12-10 14:12:22.468012] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:21.760 [2024-12-10 14:12:22.469543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:21.760 [2024-12-10 14:12:22.469573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:21.760 [2024-12-10 14:12:22.469680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:21.760 [2024-12-10 14:12:22.469681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:22.696 14:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:22.696 14:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0
00:10:22.696 14:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:22.696 14:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:22.696 14:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:10:22.696 14:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:22.696 14:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:10:22.696 [2024-12-10 14:12:23.405690] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:22.954 14:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:22.954 14:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:10:22.954 14:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:23.213 14:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:10:23.213 14:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:23.472 14:12:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:10:23.472 14:12:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:23.731 14:12:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:10:23.731 14:12:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:10:23.990 14:12:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:23.990 14:12:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:10:23.990 14:12:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:24.247 14:12:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:10:24.247 14:12:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:24.505 14:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:10:24.505 14:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:10:24.764 14:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:10:25.022 14:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:10:25.023 14:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:10:25.023 14:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:10:25.023 14:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:10:25.281 14:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:25.539 [2024-12-10 14:12:26.091306] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:25.539 14:12:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:10:25.797 14:12:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:10:25.797 14:12:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
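Stripped of the xtrace noise, the provisioning fio.sh just performed is a short rpc.py sequence plus the host-side connect. A condensed sketch (64 MiB malloc bdevs with 512 B blocks per the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE defaults above; the loop stands in for the seven individual create calls, order otherwise as traced):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for _ in 0 1 2 3 4 5 6; do
        $rpc bdev_malloc_create 64 512           # Malloc0..Malloc6
    done
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
        --hostid=801347e8-3fd0-e911-906e-0017a4403562

The four namespaces (Malloc0, Malloc1, raid0, concat0) are what surface host-side as /dev/nvme0n1 through /dev/nvme0n4, and what waitforserial counts next.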
00:10:27.172 14:12:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:10:27.172 14:12:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0
00:10:27.172 14:12:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:10:27.172 14:12:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]]
00:10:27.172 14:12:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4
00:10:27.172 14:12:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2
00:10:29.075 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:10:29.075 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:10:29.075 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:10:29.075 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4
00:10:29.075 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:10:29.075 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0
00:10:29.075 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:10:29.075 [global]
00:10:29.075 thread=1
00:10:29.075 invalidate=1
00:10:29.075 rw=write
00:10:29.075 time_based=1
00:10:29.075 runtime=1
00:10:29.075 ioengine=libaio
00:10:29.075 direct=1
00:10:29.075 bs=4096
00:10:29.075 iodepth=1
00:10:29.075 norandommap=0
00:10:29.075 numjobs=1
00:10:29.075
00:10:29.075 verify_dump=1
00:10:29.075 verify_backlog=512
00:10:29.075 verify_state_save=0
00:10:29.075 do_verify=1
00:10:29.075 verify=crc32c-intel
00:10:29.075 [job0]
00:10:29.075 filename=/dev/nvme0n1
00:10:29.075 [job1]
00:10:29.075 filename=/dev/nvme0n2
00:10:29.075 [job2]
00:10:29.075 filename=/dev/nvme0n3
00:10:29.075 [job3]
00:10:29.075 filename=/dev/nvme0n4
00:10:29.332 Could not set queue depth (nvme0n1)
00:10:29.332 Could not set queue depth (nvme0n2)
00:10:29.332 Could not set queue depth (nvme0n3)
00:10:29.332 Could not set queue depth (nvme0n4)
00:10:29.590 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:29.590 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:29.590 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:29.590 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:29.590 fio-3.35
00:10:29.590 Starting 4 threads
00:10:30.966
00:10:30.966 job0: (groupid=0, jobs=1): err= 0: pid=1526872: Tue Dec 10 14:12:31 2024
00:10:30.966 read: IOPS=2475, BW=9902KiB/s (10.1MB/s)(9912KiB/1001msec)
00:10:30.966 slat (nsec): min=6807, max=39285, avg=7935.98, stdev=1461.90
00:10:30.966 clat (usec): min=170, max=292, avg=215.91, stdev=21.87
00:10:30.966 lat (usec): min=178, max=300, avg=223.85, stdev=21.88
00:10:30.966 clat percentiles (usec):
00:10:30.966 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 198],
00:10:30.966 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 219], 00:10:30.966 | 70.00th=[ 227], 80.00th=[ 237], 90.00th=[ 249], 95.00th=[ 258], 00:10:30.966 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 281], 99.95th=[ 289], 00:10:30.966 | 99.99th=[ 293] 00:10:30.966 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:30.966 slat (usec): min=9, max=115, avg=11.05, stdev= 2.56 00:10:30.966 clat (usec): min=110, max=323, avg=157.17, stdev=42.93 00:10:30.966 lat (usec): min=120, max=335, avg=168.22, stdev=43.50 00:10:30.966 clat percentiles (usec): 00:10:30.966 | 1.00th=[ 118], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 129], 00:10:30.966 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 143], 00:10:30.966 | 70.00th=[ 149], 80.00th=[ 204], 90.00th=[ 241], 95.00th=[ 243], 00:10:30.966 | 99.00th=[ 249], 99.50th=[ 251], 99.90th=[ 265], 99.95th=[ 310], 00:10:30.966 | 99.99th=[ 326] 00:10:30.966 bw ( KiB/s): min=10936, max=10936, per=67.68%, avg=10936.00, stdev= 0.00, samples=1 00:10:30.966 iops : min= 2734, max= 2734, avg=2734.00, stdev= 0.00, samples=1 00:10:30.966 lat (usec) : 250=95.20%, 500=4.80% 00:10:30.966 cpu : usr=4.20%, sys=7.60%, ctx=5039, majf=0, minf=2 00:10:30.966 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.966 issued rwts: total=2478,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.966 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.966 job1: (groupid=0, jobs=1): err= 0: pid=1526894: Tue Dec 10 14:12:31 2024 00:10:30.967 read: IOPS=21, BW=86.8KiB/s (88.9kB/s)(88.0KiB/1014msec) 00:10:30.967 slat (nsec): min=10191, max=25845, avg=23480.86, stdev=3107.91 00:10:30.967 clat (usec): min=40870, max=42055, avg=41117.59, stdev=367.62 00:10:30.967 lat (usec): min=40894, max=42080, avg=41141.07, stdev=367.66 00:10:30.967 clat percentiles (usec): 00:10:30.967 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:30.967 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:30.967 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:10:30.967 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:30.967 | 99.99th=[42206] 00:10:30.967 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:10:30.967 slat (nsec): min=10657, max=38979, avg=12416.43, stdev=2010.06 00:10:30.967 clat (usec): min=122, max=281, avg=195.97, stdev=34.25 00:10:30.967 lat (usec): min=134, max=296, avg=208.38, stdev=34.47 00:10:30.967 clat percentiles (usec): 00:10:30.967 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 161], 00:10:30.967 | 30.00th=[ 169], 40.00th=[ 182], 50.00th=[ 194], 60.00th=[ 210], 00:10:30.967 | 70.00th=[ 221], 80.00th=[ 233], 90.00th=[ 241], 95.00th=[ 249], 00:10:30.967 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 281], 99.95th=[ 281], 00:10:30.967 | 99.99th=[ 281] 00:10:30.967 bw ( KiB/s): min= 4096, max= 4096, per=25.35%, avg=4096.00, stdev= 0.00, samples=1 00:10:30.967 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:30.967 lat (usec) : 250=91.76%, 500=4.12% 00:10:30.967 lat (msec) : 50=4.12% 00:10:30.967 cpu : usr=0.30%, sys=1.09%, ctx=537, majf=0, minf=1 00:10:30.967 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.967 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.967 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.967 job2: (groupid=0, jobs=1): err= 0: pid=1526917: Tue Dec 10 14:12:31 2024 00:10:30.967 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:10:30.967 slat (nsec): min=10349, max=23985, avg=22324.00, stdev=2695.75 00:10:30.967 clat (usec): min=40902, max=42081, avg=41425.04, stdev=517.90 00:10:30.967 lat (usec): min=40925, max=42091, avg=41447.36, stdev=517.08 00:10:30.967 clat percentiles (usec): 00:10:30.967 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:30.967 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:10:30.967 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:30.967 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:30.967 | 99.99th=[42206] 00:10:30.967 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:10:30.967 slat (nsec): min=10636, max=38352, avg=12636.06, stdev=2160.55 00:10:30.967 clat (usec): min=140, max=389, avg=169.80, stdev=23.21 00:10:30.967 lat (usec): min=151, max=428, avg=182.43, stdev=23.83 00:10:30.967 clat percentiles (usec): 00:10:30.967 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:10:30.967 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:10:30.967 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 188], 95.00th=[ 239], 00:10:30.967 | 99.00th=[ 241], 99.50th=[ 243], 99.90th=[ 392], 99.95th=[ 392], 00:10:30.967 | 99.99th=[ 392] 00:10:30.967 bw ( KiB/s): min= 4096, max= 4096, per=25.35%, avg=4096.00, stdev= 0.00, samples=1 00:10:30.967 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:30.967 lat (usec) : 250=95.69%, 500=0.19% 00:10:30.967 lat (msec) : 50=4.12% 00:10:30.967 cpu : usr=0.40%, sys=0.50%, ctx=534, majf=0, minf=1 00:10:30.967 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.967 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.967 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.967 job3: (groupid=0, jobs=1): err= 0: pid=1526925: Tue Dec 10 14:12:31 2024 00:10:30.967 read: IOPS=20, BW=83.9KiB/s (85.9kB/s)(84.0KiB/1001msec) 00:10:30.967 slat (nsec): min=10068, max=23387, avg=22176.05, stdev=2784.95 00:10:30.967 clat (usec): min=40890, max=42054, avg=41403.95, stdev=500.25 00:10:30.967 lat (usec): min=40912, max=42065, avg=41426.13, stdev=499.39 00:10:30.967 clat percentiles (usec): 00:10:30.967 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:30.967 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:10:30.967 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:30.967 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:30.967 | 99.99th=[42206] 00:10:30.967 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:30.967 slat (nsec): min=9274, max=40922, avg=10447.89, stdev=1724.41 00:10:30.967 clat (usec): min=213, max=418, avg=242.72, stdev= 8.92 00:10:30.967 lat (usec): min=227, max=459, avg=253.17, stdev=10.03 00:10:30.967 clat 
percentiles (usec): 00:10:30.967 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 239], 20.00th=[ 241], 00:10:30.967 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 243], 60.00th=[ 243], 00:10:30.967 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 245], 95.00th=[ 247], 00:10:30.967 | 99.00th=[ 253], 99.50th=[ 281], 99.90th=[ 420], 99.95th=[ 420], 00:10:30.967 | 99.99th=[ 420] 00:10:30.967 bw ( KiB/s): min= 4096, max= 4096, per=25.35%, avg=4096.00, stdev= 0.00, samples=1 00:10:30.967 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:30.967 lat (usec) : 250=94.18%, 500=1.88% 00:10:30.967 lat (msec) : 50=3.94% 00:10:30.967 cpu : usr=0.20%, sys=0.50%, ctx=533, majf=0, minf=1 00:10:30.967 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.967 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.967 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.967 00:10:30.967 Run status group 0 (all jobs): 00:10:30.967 READ: bw=9.80MiB/s (10.3MB/s), 83.9KiB/s-9902KiB/s (85.9kB/s-10.1MB/s), io=9.93MiB (10.4MB), run=1001-1014msec 00:10:30.967 WRITE: bw=15.8MiB/s (16.5MB/s), 2020KiB/s-9.99MiB/s (2068kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1014msec 00:10:30.967 00:10:30.967 Disk stats (read/write): 00:10:30.967 nvme0n1: ios=2098/2127, merge=0/0, ticks=459/320, in_queue=779, util=86.57% 00:10:30.967 nvme0n2: ios=42/512, merge=0/0, ticks=1687/92, in_queue=1779, util=97.04% 00:10:30.967 nvme0n3: ios=17/512, merge=0/0, ticks=706/87, in_queue=793, util=88.70% 00:10:30.967 nvme0n4: ios=17/512, merge=0/0, ticks=704/122, in_queue=826, util=89.65% 00:10:30.967 14:12:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:30.967 [global] 00:10:30.967 thread=1 00:10:30.967 invalidate=1 00:10:30.967 rw=randwrite 00:10:30.967 time_based=1 00:10:30.967 runtime=1 00:10:30.967 ioengine=libaio 00:10:30.967 direct=1 00:10:30.967 bs=4096 00:10:30.967 iodepth=1 00:10:30.967 norandommap=0 00:10:30.967 numjobs=1 00:10:30.967 00:10:30.967 verify_dump=1 00:10:30.967 verify_backlog=512 00:10:30.967 verify_state_save=0 00:10:30.967 do_verify=1 00:10:30.967 verify=crc32c-intel 00:10:30.967 [job0] 00:10:30.967 filename=/dev/nvme0n1 00:10:30.967 [job1] 00:10:30.967 filename=/dev/nvme0n2 00:10:30.967 [job2] 00:10:30.967 filename=/dev/nvme0n3 00:10:30.967 [job3] 00:10:30.967 filename=/dev/nvme0n4 00:10:30.967 Could not set queue depth (nvme0n1) 00:10:30.967 Could not set queue depth (nvme0n2) 00:10:30.967 Could not set queue depth (nvme0n3) 00:10:30.967 Could not set queue depth (nvme0n4) 00:10:30.967 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.967 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.967 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.967 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.967 fio-3.35 00:10:30.967 Starting 4 threads 00:10:32.342 00:10:32.342 job0: (groupid=0, jobs=1): err= 0: pid=1527348: Tue Dec 10 14:12:32 2024 00:10:32.342 read: IOPS=22, BW=88.5KiB/s 
(90.6kB/s)(92.0KiB/1040msec) 00:10:32.343 slat (nsec): min=10536, max=23417, avg=13153.00, stdev=3566.80 00:10:32.343 clat (usec): min=40753, max=42009, avg=41283.23, stdev=477.33 00:10:32.343 lat (usec): min=40765, max=42021, avg=41296.38, stdev=476.19 00:10:32.343 clat percentiles (usec): 00:10:32.343 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:32.343 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:32.343 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:32.343 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:32.343 | 99.99th=[42206] 00:10:32.343 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:10:32.343 slat (nsec): min=9523, max=40848, avg=11752.63, stdev=1944.29 00:10:32.343 clat (usec): min=123, max=288, avg=160.02, stdev=19.32 00:10:32.343 lat (usec): min=135, max=329, avg=171.78, stdev=20.17 00:10:32.343 clat percentiles (usec): 00:10:32.343 | 1.00th=[ 128], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 145], 00:10:32.343 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 163], 00:10:32.343 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 194], 00:10:32.343 | 99.00th=[ 208], 99.50th=[ 221], 99.90th=[ 289], 99.95th=[ 289], 00:10:32.343 | 99.99th=[ 289] 00:10:32.343 bw ( KiB/s): min= 4096, max= 4096, per=26.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:32.343 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:32.343 lat (usec) : 250=95.51%, 500=0.19% 00:10:32.343 lat (msec) : 50=4.30% 00:10:32.343 cpu : usr=0.48%, sys=0.38%, ctx=538, majf=0, minf=1 00:10:32.343 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.343 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.343 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.343 job1: (groupid=0, jobs=1): err= 0: pid=1527365: Tue Dec 10 14:12:32 2024 00:10:32.343 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:10:32.343 slat (nsec): min=20051, max=24608, avg=23766.05, stdev=874.55 00:10:32.343 clat (usec): min=40869, max=42311, avg=41481.36, stdev=537.11 00:10:32.343 lat (usec): min=40893, max=42331, avg=41505.12, stdev=536.74 00:10:32.343 clat percentiles (usec): 00:10:32.343 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:32.343 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:10:32.343 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:32.343 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:32.343 | 99.99th=[42206] 00:10:32.343 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:10:32.343 slat (nsec): min=9851, max=40436, avg=12015.30, stdev=2126.93 00:10:32.343 clat (usec): min=135, max=278, avg=170.60, stdev=12.49 00:10:32.343 lat (usec): min=146, max=319, avg=182.61, stdev=13.31 00:10:32.343 clat percentiles (usec): 00:10:32.343 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:10:32.343 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:10:32.343 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 190], 00:10:32.343 | 99.00th=[ 198], 99.50th=[ 210], 99.90th=[ 281], 99.95th=[ 281], 00:10:32.343 | 99.99th=[ 281] 00:10:32.343 bw ( KiB/s): min= 4096, 
max= 4096, per=26.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:32.343 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:32.343 lat (usec) : 250=95.69%, 500=0.19% 00:10:32.343 lat (msec) : 50=4.12% 00:10:32.343 cpu : usr=0.40%, sys=0.89%, ctx=535, majf=0, minf=2 00:10:32.343 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.343 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.343 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.343 job2: (groupid=0, jobs=1): err= 0: pid=1527389: Tue Dec 10 14:12:32 2024 00:10:32.343 read: IOPS=118, BW=474KiB/s (485kB/s)(476KiB/1004msec) 00:10:32.343 slat (nsec): min=8086, max=25847, avg=11496.19, stdev=5533.75 00:10:32.343 clat (usec): min=236, max=41175, avg=7539.94, stdev=15520.14 00:10:32.343 lat (usec): min=259, max=41184, avg=7551.43, stdev=15523.61 00:10:32.343 clat percentiles (usec): 00:10:32.343 | 1.00th=[ 243], 5.00th=[ 302], 10.00th=[ 314], 20.00th=[ 322], 00:10:32.343 | 30.00th=[ 343], 40.00th=[ 392], 50.00th=[ 429], 60.00th=[ 441], 00:10:32.343 | 70.00th=[ 453], 80.00th=[ 529], 90.00th=[41157], 95.00th=[41157], 00:10:32.343 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:32.343 | 99.99th=[41157] 00:10:32.343 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:10:32.343 slat (nsec): min=11052, max=39509, avg=14905.62, stdev=5203.61 00:10:32.343 clat (usec): min=137, max=315, avg=184.67, stdev=26.39 00:10:32.343 lat (usec): min=149, max=355, avg=199.57, stdev=28.11 00:10:32.343 clat percentiles (usec): 00:10:32.343 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 165], 00:10:32.343 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:10:32.343 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 221], 95.00th=[ 239], 00:10:32.343 | 99.00th=[ 273], 99.50th=[ 293], 99.90th=[ 318], 99.95th=[ 318], 00:10:32.343 | 99.99th=[ 318] 00:10:32.343 bw ( KiB/s): min= 4096, max= 4096, per=26.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:32.343 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:32.343 lat (usec) : 250=79.40%, 500=16.80%, 750=0.48% 00:10:32.343 lat (msec) : 50=3.33% 00:10:32.343 cpu : usr=0.70%, sys=1.00%, ctx=633, majf=0, minf=1 00:10:32.343 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.343 issued rwts: total=119,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.343 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.343 job3: (groupid=0, jobs=1): err= 0: pid=1527396: Tue Dec 10 14:12:32 2024 00:10:32.343 read: IOPS=2441, BW=9766KiB/s (10.0MB/s)(9776KiB/1001msec) 00:10:32.343 slat (nsec): min=8244, max=31711, avg=9121.57, stdev=947.31 00:10:32.343 clat (usec): min=167, max=41180, avg=230.13, stdev=829.13 00:10:32.343 lat (usec): min=176, max=41189, avg=239.25, stdev=829.14 00:10:32.343 clat percentiles (usec): 00:10:32.343 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 196], 00:10:32.343 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:10:32.343 | 70.00th=[ 221], 80.00th=[ 237], 90.00th=[ 249], 95.00th=[ 255], 00:10:32.343 | 99.00th=[ 269], 
99.50th=[ 273], 99.90th=[ 586], 99.95th=[ 750], 00:10:32.343 | 99.99th=[41157] 00:10:32.343 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:32.343 slat (nsec): min=11413, max=40697, avg=12587.21, stdev=1587.47 00:10:32.343 clat (usec): min=114, max=259, avg=143.68, stdev=21.57 00:10:32.343 lat (usec): min=127, max=281, avg=156.26, stdev=21.78 00:10:32.343 clat percentiles (usec): 00:10:32.343 | 1.00th=[ 120], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 129], 00:10:32.343 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 141], 00:10:32.343 | 70.00th=[ 147], 80.00th=[ 157], 90.00th=[ 174], 95.00th=[ 192], 00:10:32.343 | 99.00th=[ 221], 99.50th=[ 231], 99.90th=[ 247], 99.95th=[ 249], 00:10:32.343 | 99.99th=[ 260] 00:10:32.343 bw ( KiB/s): min=11528, max=11528, per=73.18%, avg=11528.00, stdev= 0.00, samples=1 00:10:32.343 iops : min= 2882, max= 2882, avg=2882.00, stdev= 0.00, samples=1 00:10:32.343 lat (usec) : 250=95.82%, 500=4.10%, 750=0.06% 00:10:32.343 lat (msec) : 50=0.02% 00:10:32.343 cpu : usr=2.80%, sys=5.90%, ctx=5005, majf=0, minf=1 00:10:32.343 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.343 issued rwts: total=2444,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.343 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.343 00:10:32.343 Run status group 0 (all jobs): 00:10:32.343 READ: bw=9.79MiB/s (10.3MB/s), 87.2KiB/s-9766KiB/s (89.3kB/s-10.0MB/s), io=10.2MiB (10.7MB), run=1001-1040msec 00:10:32.343 WRITE: bw=15.4MiB/s (16.1MB/s), 1969KiB/s-9.99MiB/s (2016kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1040msec 00:10:32.343 00:10:32.343 Disk stats (read/write): 00:10:32.343 nvme0n1: ios=46/512, merge=0/0, ticks=1418/82, in_queue=1500, util=97.49% 00:10:32.343 nvme0n2: ios=18/512, merge=0/0, ticks=747/84, in_queue=831, util=85.96% 00:10:32.343 nvme0n3: ios=173/512, merge=0/0, ticks=1127/87, in_queue=1214, util=97.27% 00:10:32.343 nvme0n4: ios=2105/2110, merge=0/0, ticks=784/293, in_queue=1077, util=97.25% 00:10:32.343 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:32.343 [global] 00:10:32.343 thread=1 00:10:32.343 invalidate=1 00:10:32.343 rw=write 00:10:32.343 time_based=1 00:10:32.343 runtime=1 00:10:32.343 ioengine=libaio 00:10:32.343 direct=1 00:10:32.343 bs=4096 00:10:32.343 iodepth=128 00:10:32.343 norandommap=0 00:10:32.343 numjobs=1 00:10:32.343 00:10:32.343 verify_dump=1 00:10:32.343 verify_backlog=512 00:10:32.343 verify_state_save=0 00:10:32.343 do_verify=1 00:10:32.343 verify=crc32c-intel 00:10:32.343 [job0] 00:10:32.343 filename=/dev/nvme0n1 00:10:32.343 [job1] 00:10:32.343 filename=/dev/nvme0n2 00:10:32.343 [job2] 00:10:32.343 filename=/dev/nvme0n3 00:10:32.343 [job3] 00:10:32.343 filename=/dev/nvme0n4 00:10:32.343 Could not set queue depth (nvme0n1) 00:10:32.343 Could not set queue depth (nvme0n2) 00:10:32.343 Could not set queue depth (nvme0n3) 00:10:32.343 Could not set queue depth (nvme0n4) 00:10:32.602 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.602 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.602 job2: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.602 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.602 fio-3.35 00:10:32.602 Starting 4 threads 00:10:33.979 00:10:33.979 job0: (groupid=0, jobs=1): err= 0: pid=1527768: Tue Dec 10 14:12:34 2024 00:10:33.979 read: IOPS=6489, BW=25.3MiB/s (26.6MB/s)(25.5MiB/1005msec) 00:10:33.979 slat (nsec): min=1262, max=9525.3k, avg=84544.73, stdev=617714.58 00:10:33.979 clat (usec): min=1389, max=19475, avg=10469.13, stdev=2476.86 00:10:33.979 lat (usec): min=3551, max=24291, avg=10553.68, stdev=2520.87 00:10:33.979 clat percentiles (usec): 00:10:33.979 | 1.00th=[ 4228], 5.00th=[ 7177], 10.00th=[ 8586], 20.00th=[ 9110], 00:10:33.979 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10028], 00:10:33.979 | 70.00th=[10552], 80.00th=[11731], 90.00th=[14222], 95.00th=[15926], 00:10:33.979 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18744], 99.95th=[18744], 00:10:33.979 | 99.99th=[19530] 00:10:33.979 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:10:33.979 slat (usec): min=2, max=8186, avg=62.37, stdev=329.04 00:10:33.979 clat (usec): min=1560, max=19160, avg=8888.29, stdev=1787.91 00:10:33.979 lat (usec): min=1574, max=19172, avg=8950.65, stdev=1823.53 00:10:33.979 clat percentiles (usec): 00:10:33.979 | 1.00th=[ 3163], 5.00th=[ 4883], 10.00th=[ 6063], 20.00th=[ 7701], 00:10:33.979 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9765], 00:10:33.979 | 70.00th=[ 9896], 80.00th=[ 9896], 90.00th=[10028], 95.00th=[10028], 00:10:33.979 | 99.00th=[10421], 99.50th=[13304], 99.90th=[18482], 99.95th=[18482], 00:10:33.979 | 99.99th=[19268] 00:10:33.979 bw ( KiB/s): min=24912, max=28336, per=35.95%, avg=26624.00, stdev=2421.13, samples=2 00:10:33.979 iops : min= 6228, max= 7084, avg=6656.00, stdev=605.28, samples=2 00:10:33.979 lat (msec) : 2=0.04%, 4=1.56%, 10=71.54%, 20=26.86% 00:10:33.979 cpu : usr=4.68%, sys=6.97%, ctx=756, majf=0, minf=1 00:10:33.979 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:33.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.979 issued rwts: total=6522,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.979 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.979 job1: (groupid=0, jobs=1): err= 0: pid=1527769: Tue Dec 10 14:12:34 2024 00:10:33.979 read: IOPS=2067, BW=8270KiB/s (8469kB/s)(8328KiB/1007msec) 00:10:33.979 slat (nsec): min=1644, max=12673k, avg=178382.44, stdev=956977.56 00:10:33.979 clat (usec): min=5213, max=59896, avg=22120.50, stdev=10584.88 00:10:33.979 lat (usec): min=6505, max=59922, avg=22298.88, stdev=10656.63 00:10:33.979 clat percentiles (usec): 00:10:33.979 | 1.00th=[ 9241], 5.00th=[15270], 10.00th=[15926], 20.00th=[16319], 00:10:33.979 | 30.00th=[16712], 40.00th=[17171], 50.00th=[17695], 60.00th=[18482], 00:10:33.979 | 70.00th=[19268], 80.00th=[26608], 90.00th=[38536], 95.00th=[47973], 00:10:33.979 | 99.00th=[56886], 99.50th=[56886], 99.90th=[58983], 99.95th=[60031], 00:10:33.979 | 99.99th=[60031] 00:10:33.979 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:10:33.979 slat (usec): min=2, max=35148, avg=239.12, stdev=1259.24 00:10:33.979 clat (usec): min=11060, max=82546, avg=29231.12, stdev=15234.73 00:10:33.979 lat (usec): min=11070, max=82585, avg=29470.24, 
stdev=15327.76 00:10:33.979 clat percentiles (usec): 00:10:33.979 | 1.00th=[13435], 5.00th=[13566], 10.00th=[14877], 20.00th=[19792], 00:10:33.979 | 30.00th=[21103], 40.00th=[21103], 50.00th=[21365], 60.00th=[21890], 00:10:33.979 | 70.00th=[32900], 80.00th=[43779], 90.00th=[59507], 95.00th=[61080], 00:10:33.979 | 99.00th=[63701], 99.50th=[64750], 99.90th=[64750], 99.95th=[64750], 00:10:33.979 | 99.99th=[82314] 00:10:33.979 bw ( KiB/s): min= 9608, max=10128, per=13.33%, avg=9868.00, stdev=367.70, samples=2 00:10:33.979 iops : min= 2402, max= 2532, avg=2467.00, stdev=91.92, samples=2 00:10:33.979 lat (msec) : 10=0.54%, 20=44.81%, 50=44.31%, 100=10.34% 00:10:33.979 cpu : usr=2.19%, sys=3.18%, ctx=354, majf=0, minf=1 00:10:33.979 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:10:33.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.979 issued rwts: total=2082,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.979 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.979 job2: (groupid=0, jobs=1): err= 0: pid=1527771: Tue Dec 10 14:12:34 2024 00:10:33.979 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:10:33.979 slat (nsec): min=1413, max=12611k, avg=131043.17, stdev=861763.29 00:10:33.979 clat (usec): min=5169, max=36131, avg=14976.85, stdev=5209.78 00:10:33.979 lat (usec): min=5180, max=36142, avg=15107.90, stdev=5272.22 00:10:33.979 clat percentiles (usec): 00:10:33.980 | 1.00th=[ 6521], 5.00th=[10159], 10.00th=[11338], 20.00th=[12125], 00:10:33.980 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13173], 60.00th=[13698], 00:10:33.980 | 70.00th=[14484], 80.00th=[16712], 90.00th=[23462], 95.00th=[27395], 00:10:33.980 | 99.00th=[32375], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:10:33.980 | 99.99th=[35914] 00:10:33.980 write: IOPS=3709, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1011msec); 0 zone resets 00:10:33.980 slat (usec): min=2, max=10869, avg=130.29, stdev=498.03 00:10:33.980 clat (usec): min=1432, max=36440, avg=19595.32, stdev=6849.62 00:10:33.980 lat (usec): min=1459, max=36462, avg=19725.61, stdev=6909.38 00:10:33.980 clat percentiles (usec): 00:10:33.980 | 1.00th=[ 4883], 5.00th=[ 7701], 10.00th=[ 9241], 20.00th=[11994], 00:10:33.980 | 30.00th=[16909], 40.00th=[20841], 50.00th=[21103], 60.00th=[21365], 00:10:33.980 | 70.00th=[23200], 80.00th=[24773], 90.00th=[27395], 95.00th=[30016], 00:10:33.980 | 99.00th=[33817], 99.50th=[34866], 99.90th=[36439], 99.95th=[36439], 00:10:33.980 | 99.99th=[36439] 00:10:33.980 bw ( KiB/s): min=13256, max=15728, per=19.57%, avg=14492.00, stdev=1747.97, samples=2 00:10:33.980 iops : min= 3314, max= 3932, avg=3623.00, stdev=436.99, samples=2 00:10:33.980 lat (msec) : 2=0.03%, 4=0.25%, 10=8.32%, 20=51.61%, 50=39.80% 00:10:33.980 cpu : usr=3.07%, sys=4.65%, ctx=457, majf=0, minf=1 00:10:33.980 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:33.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.980 issued rwts: total=3584,3750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.980 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.980 job3: (groupid=0, jobs=1): err= 0: pid=1527772: Tue Dec 10 14:12:34 2024 00:10:33.980 read: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec) 00:10:33.980 slat (nsec): min=1423, max=10533k, 
avg=98438.89, stdev=709051.76 00:10:33.980 clat (usec): min=3565, max=22203, avg=12119.78, stdev=3021.02 00:10:33.980 lat (usec): min=3571, max=22220, avg=12218.22, stdev=3069.19 00:10:33.980 clat percentiles (usec): 00:10:33.980 | 1.00th=[ 4817], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[10421], 00:10:33.980 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:10:33.980 | 70.00th=[12125], 80.00th=[14353], 90.00th=[17171], 95.00th=[18744], 00:10:33.980 | 99.00th=[20317], 99.50th=[20841], 99.90th=[21103], 99.95th=[21365], 00:10:33.980 | 99.99th=[22152] 00:10:33.980 write: IOPS=5710, BW=22.3MiB/s (23.4MB/s)(22.5MiB/1007msec); 0 zone resets 00:10:33.980 slat (usec): min=2, max=9600, avg=72.23, stdev=310.86 00:10:33.980 clat (usec): min=2321, max=21188, avg=10295.17, stdev=2096.76 00:10:33.980 lat (usec): min=2330, max=21192, avg=10367.39, stdev=2116.36 00:10:33.980 clat percentiles (usec): 00:10:33.980 | 1.00th=[ 3752], 5.00th=[ 5473], 10.00th=[ 6849], 20.00th=[ 9372], 00:10:33.980 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11207], 60.00th=[11207], 00:10:33.980 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11600], 95.00th=[11731], 00:10:33.980 | 99.00th=[12125], 99.50th=[12125], 99.90th=[20841], 99.95th=[21103], 00:10:33.980 | 99.99th=[21103] 00:10:33.980 bw ( KiB/s): min=20544, max=24560, per=30.46%, avg=22552.00, stdev=2839.74, samples=2 00:10:33.980 iops : min= 5136, max= 6140, avg=5638.00, stdev=709.94, samples=2 00:10:33.980 lat (msec) : 4=0.72%, 10=19.83%, 20=78.59%, 50=0.86% 00:10:33.980 cpu : usr=4.47%, sys=6.26%, ctx=729, majf=0, minf=1 00:10:33.980 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:33.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.980 issued rwts: total=5632,5750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.980 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.980 00:10:33.980 Run status group 0 (all jobs): 00:10:33.980 READ: bw=68.9MiB/s (72.2MB/s), 8270KiB/s-25.3MiB/s (8469kB/s-26.6MB/s), io=69.6MiB (73.0MB), run=1005-1011msec 00:10:33.980 WRITE: bw=72.3MiB/s (75.8MB/s), 9.93MiB/s-25.9MiB/s (10.4MB/s-27.1MB/s), io=73.1MiB (76.7MB), run=1005-1011msec 00:10:33.980 00:10:33.980 Disk stats (read/write): 00:10:33.980 nvme0n1: ios=5415/5632, merge=0/0, ticks=54408/48894, in_queue=103302, util=85.87% 00:10:33.980 nvme0n2: ios=1874/2048, merge=0/0, ticks=14144/19758, in_queue=33902, util=97.56% 00:10:33.980 nvme0n3: ios=2993/3072, merge=0/0, ticks=43627/60150, in_queue=103777, util=97.48% 00:10:33.980 nvme0n4: ios=4626/4943, merge=0/0, ticks=54568/49935, in_queue=104503, util=97.46% 00:10:33.980 14:12:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:33.980 [global] 00:10:33.980 thread=1 00:10:33.980 invalidate=1 00:10:33.980 rw=randwrite 00:10:33.980 time_based=1 00:10:33.980 runtime=1 00:10:33.980 ioengine=libaio 00:10:33.980 direct=1 00:10:33.980 bs=4096 00:10:33.980 iodepth=128 00:10:33.980 norandommap=0 00:10:33.980 numjobs=1 00:10:33.980 00:10:33.980 verify_dump=1 00:10:33.980 verify_backlog=512 00:10:33.980 verify_state_save=0 00:10:33.980 do_verify=1 00:10:33.980 verify=crc32c-intel 00:10:33.980 [job0] 00:10:33.980 filename=/dev/nvme0n1 00:10:33.980 [job1] 00:10:33.980 filename=/dev/nvme0n2 00:10:33.980 [job2] 00:10:33.980 
filename=/dev/nvme0n3 00:10:33.980 [job3] 00:10:33.980 filename=/dev/nvme0n4 00:10:33.980 Could not set queue depth (nvme0n1) 00:10:33.980 Could not set queue depth (nvme0n2) 00:10:33.980 Could not set queue depth (nvme0n3) 00:10:33.980 Could not set queue depth (nvme0n4) 00:10:34.239 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.239 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.239 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.239 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.239 fio-3.35 00:10:34.239 Starting 4 threads 00:10:35.615 00:10:35.615 job0: (groupid=0, jobs=1): err= 0: pid=1528137: Tue Dec 10 14:12:36 2024 00:10:35.615 read: IOPS=2640, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1011msec) 00:10:35.615 slat (nsec): min=1282, max=23151k, avg=220541.43, stdev=1419127.96 00:10:35.615 clat (msec): min=5, max=110, avg=20.41, stdev=18.34 00:10:35.615 lat (msec): min=5, max=110, avg=20.63, stdev=18.54 00:10:35.615 clat percentiles (msec): 00:10:35.615 | 1.00th=[ 7], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 10], 00:10:35.615 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 14], 60.00th=[ 18], 00:10:35.615 | 70.00th=[ 21], 80.00th=[ 26], 90.00th=[ 39], 95.00th=[ 68], 00:10:35.615 | 99.00th=[ 94], 99.50th=[ 103], 99.90th=[ 111], 99.95th=[ 111], 00:10:35.615 | 99.99th=[ 111] 00:10:35.615 write: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec); 0 zone resets 00:10:35.615 slat (usec): min=2, max=17710, avg=128.25, stdev=657.91 00:10:35.615 clat (usec): min=1565, max=110838, avg=23985.91, stdev=18190.82 00:10:35.615 lat (usec): min=1577, max=110842, avg=24114.15, stdev=18233.39 00:10:35.615 clat percentiles (msec): 00:10:35.615 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:10:35.615 | 30.00th=[ 15], 40.00th=[ 20], 50.00th=[ 23], 60.00th=[ 24], 00:10:35.615 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 52], 95.00th=[ 77], 00:10:35.615 | 99.00th=[ 83], 99.50th=[ 85], 99.90th=[ 90], 99.95th=[ 111], 00:10:35.615 | 99.99th=[ 111] 00:10:35.615 bw ( KiB/s): min= 9152, max=15280, per=19.45%, avg=12216.00, stdev=4333.15, samples=2 00:10:35.615 iops : min= 2288, max= 3820, avg=3054.00, stdev=1083.29, samples=2 00:10:35.615 lat (msec) : 2=0.05%, 4=0.10%, 10=29.71%, 20=23.55%, 50=37.22% 00:10:35.615 lat (msec) : 100=8.97%, 250=0.40% 00:10:35.615 cpu : usr=1.78%, sys=4.55%, ctx=311, majf=0, minf=1 00:10:35.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:35.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.615 issued rwts: total=2670,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.615 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.615 job1: (groupid=0, jobs=1): err= 0: pid=1528138: Tue Dec 10 14:12:36 2024 00:10:35.615 read: IOPS=2656, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1011msec) 00:10:35.615 slat (nsec): min=1496, max=15751k, avg=168555.32, stdev=1134536.75 00:10:35.615 clat (usec): min=5995, max=68541, avg=19331.16, stdev=8071.75 00:10:35.615 lat (usec): min=6007, max=68544, avg=19499.72, stdev=8159.52 00:10:35.615 clat percentiles (usec): 00:10:35.615 | 1.00th=[ 7177], 5.00th=[ 9634], 10.00th=[11731], 20.00th=[15664], 00:10:35.615 | 30.00th=[16450], 
40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:10:35.615 | 70.00th=[20055], 80.00th=[23200], 90.00th=[27919], 95.00th=[33817], 00:10:35.615 | 99.00th=[57934], 99.50th=[65799], 99.90th=[68682], 99.95th=[68682], 00:10:35.615 | 99.99th=[68682] 00:10:35.615 write: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec); 0 zone resets 00:10:35.615 slat (usec): min=2, max=25898, avg=172.23, stdev=972.25 00:10:35.615 clat (usec): min=2985, max=75967, avg=24717.55, stdev=13672.11 00:10:35.615 lat (usec): min=2995, max=77535, avg=24889.78, stdev=13741.85 00:10:35.615 clat percentiles (usec): 00:10:35.615 | 1.00th=[ 4883], 5.00th=[ 7701], 10.00th=[ 8356], 20.00th=[14091], 00:10:35.615 | 30.00th=[18482], 40.00th=[22676], 50.00th=[23987], 60.00th=[24249], 00:10:35.615 | 70.00th=[24511], 80.00th=[27919], 90.00th=[46400], 95.00th=[53216], 00:10:35.615 | 99.00th=[72877], 99.50th=[74974], 99.90th=[76022], 99.95th=[76022], 00:10:35.615 | 99.99th=[76022] 00:10:35.615 bw ( KiB/s): min=12080, max=12480, per=19.55%, avg=12280.00, stdev=282.84, samples=2 00:10:35.615 iops : min= 3020, max= 3120, avg=3070.00, stdev=70.71, samples=2 00:10:35.615 lat (msec) : 4=0.21%, 10=10.44%, 20=38.62%, 50=46.20%, 100=4.53% 00:10:35.615 cpu : usr=1.88%, sys=4.65%, ctx=314, majf=0, minf=1 00:10:35.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:35.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.615 issued rwts: total=2686,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.615 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.615 job2: (groupid=0, jobs=1): err= 0: pid=1528139: Tue Dec 10 14:12:36 2024 00:10:35.615 read: IOPS=2522, BW=9.85MiB/s (10.3MB/s)(10.0MiB/1015msec) 00:10:35.615 slat (nsec): min=1283, max=50845k, avg=175378.30, stdev=1565643.23 00:10:35.615 clat (usec): min=5962, max=81852, avg=20712.44, stdev=11109.80 00:10:35.615 lat (usec): min=5968, max=81877, avg=20887.82, stdev=11222.05 00:10:35.615 clat percentiles (usec): 00:10:35.615 | 1.00th=[ 6980], 5.00th=[10159], 10.00th=[10552], 20.00th=[10683], 00:10:35.615 | 30.00th=[15008], 40.00th=[17433], 50.00th=[17695], 60.00th=[17957], 00:10:35.615 | 70.00th=[21365], 80.00th=[29492], 90.00th=[32900], 95.00th=[44303], 00:10:35.615 | 99.00th=[58983], 99.50th=[58983], 99.90th=[58983], 99.95th=[61080], 00:10:35.615 | 99.99th=[82314] 00:10:35.615 write: IOPS=2685, BW=10.5MiB/s (11.0MB/s)(10.6MiB/1015msec); 0 zone resets 00:10:35.615 slat (usec): min=2, max=13942, avg=196.89, stdev=903.56 00:10:35.615 clat (usec): min=1380, max=101375, avg=27784.16, stdev=18900.89 00:10:35.615 lat (usec): min=1394, max=101386, avg=27981.05, stdev=18991.42 00:10:35.615 clat percentiles (msec): 00:10:35.615 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:10:35.615 | 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 25], 00:10:35.615 | 70.00th=[ 26], 80.00th=[ 39], 90.00th=[ 51], 95.00th=[ 69], 00:10:35.615 | 99.00th=[ 96], 99.50th=[ 100], 99.90th=[ 102], 99.95th=[ 102], 00:10:35.615 | 99.99th=[ 102] 00:10:35.615 bw ( KiB/s): min= 8704, max=12080, per=16.55%, avg=10392.00, stdev=2387.19, samples=2 00:10:35.615 iops : min= 2176, max= 3020, avg=2598.00, stdev=596.80, samples=2 00:10:35.615 lat (msec) : 2=0.17%, 4=0.23%, 10=13.17%, 20=32.77%, 50=45.42% 00:10:35.615 lat (msec) : 100=8.13%, 250=0.11% 00:10:35.615 cpu : usr=1.68%, sys=3.75%, ctx=309, majf=0, minf=1 00:10:35.615 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:35.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.615 issued rwts: total=2560,2726,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.615 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.615 job3: (groupid=0, jobs=1): err= 0: pid=1528140: Tue Dec 10 14:12:36 2024 00:10:35.615 read: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec) 00:10:35.615 slat (nsec): min=1436, max=4278.3k, avg=71618.42, stdev=409349.29 00:10:35.615 clat (usec): min=5607, max=13739, avg=8983.99, stdev=1159.15 00:10:35.615 lat (usec): min=5622, max=14021, avg=9055.61, stdev=1199.68 00:10:35.615 clat percentiles (usec): 00:10:35.615 | 1.00th=[ 6063], 5.00th=[ 6718], 10.00th=[ 7439], 20.00th=[ 8586], 00:10:35.615 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 8979], 00:10:35.615 | 70.00th=[ 9241], 80.00th=[ 9372], 90.00th=[10421], 95.00th=[11207], 00:10:35.615 | 99.00th=[12387], 99.50th=[12649], 99.90th=[13042], 99.95th=[13042], 00:10:35.615 | 99.99th=[13698] 00:10:35.615 write: IOPS=7030, BW=27.5MiB/s (28.8MB/s)(27.6MiB/1005msec); 0 zone resets 00:10:35.615 slat (usec): min=2, max=23793, avg=68.37, stdev=437.41 00:10:35.615 clat (usec): min=4030, max=44442, avg=9511.24, stdev=3038.08 00:10:35.615 lat (usec): min=4043, max=44482, avg=9579.62, stdev=3062.80 00:10:35.615 clat percentiles (usec): 00:10:35.616 | 1.00th=[ 5473], 5.00th=[ 7111], 10.00th=[ 7963], 20.00th=[ 8586], 00:10:35.616 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9110], 00:10:35.616 | 70.00th=[ 9241], 80.00th=[ 9241], 90.00th=[10421], 95.00th=[12256], 00:10:35.616 | 99.00th=[25822], 99.50th=[27395], 99.90th=[27657], 99.95th=[27657], 00:10:35.616 | 99.99th=[44303] 00:10:35.616 bw ( KiB/s): min=27680, max=27832, per=44.20%, avg=27756.00, stdev=107.48, samples=2 00:10:35.616 iops : min= 6920, max= 6958, avg=6939.00, stdev=26.87, samples=2 00:10:35.616 lat (msec) : 10=88.33%, 20=9.83%, 50=1.84% 00:10:35.616 cpu : usr=5.88%, sys=6.57%, ctx=833, majf=0, minf=1 00:10:35.616 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:35.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.616 issued rwts: total=6656,7066,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.616 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.616 00:10:35.616 Run status group 0 (all jobs): 00:10:35.616 READ: bw=56.1MiB/s (58.8MB/s), 9.85MiB/s-25.9MiB/s (10.3MB/s-27.1MB/s), io=56.9MiB (59.7MB), run=1005-1015msec 00:10:35.616 WRITE: bw=61.3MiB/s (64.3MB/s), 10.5MiB/s-27.5MiB/s (11.0MB/s-28.8MB/s), io=62.2MiB (65.3MB), run=1005-1015msec 00:10:35.616 00:10:35.616 Disk stats (read/write): 00:10:35.616 nvme0n1: ios=2098/2559, merge=0/0, ticks=33503/58252, in_queue=91755, util=80.66% 00:10:35.616 nvme0n2: ios=2072/2272, merge=0/0, ticks=39581/58405, in_queue=97986, util=99.90% 00:10:35.616 nvme0n3: ios=2048/2071, merge=0/0, ticks=45204/50845, in_queue=96049, util=86.92% 00:10:35.616 nvme0n4: ios=5141/5560, merge=0/0, ticks=23683/28074, in_queue=51757, util=99.78% 00:10:35.616 14:12:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:35.616 14:12:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1528373 00:10:35.616 14:12:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:35.616 14:12:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:35.616 [global] 00:10:35.616 thread=1 00:10:35.616 invalidate=1 00:10:35.616 rw=read 00:10:35.616 time_based=1 00:10:35.616 runtime=10 00:10:35.616 ioengine=libaio 00:10:35.616 direct=1 00:10:35.616 bs=4096 00:10:35.616 iodepth=1 00:10:35.616 norandommap=1 00:10:35.616 numjobs=1 00:10:35.616 00:10:35.616 [job0] 00:10:35.616 filename=/dev/nvme0n1 00:10:35.616 [job1] 00:10:35.616 filename=/dev/nvme0n2 00:10:35.616 [job2] 00:10:35.616 filename=/dev/nvme0n3 00:10:35.616 [job3] 00:10:35.616 filename=/dev/nvme0n4 00:10:35.616 Could not set queue depth (nvme0n1) 00:10:35.616 Could not set queue depth (nvme0n2) 00:10:35.616 Could not set queue depth (nvme0n3) 00:10:35.616 Could not set queue depth (nvme0n4) 00:10:35.874 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.874 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.874 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.874 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.874 fio-3.35 00:10:35.874 Starting 4 threads 00:10:38.406 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:38.665 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=39362560, buflen=4096 00:10:38.665 fio: pid=1528517, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:38.665 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:38.924 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=43229184, buflen=4096 00:10:38.924 fio: pid=1528516, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:38.924 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.924 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:39.182 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=45670400, buflen=4096 00:10:39.182 fio: pid=1528513, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:39.182 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.182 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:39.442 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=438272, buflen=4096 00:10:39.442 fio: pid=1528514, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:39.442 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.442 14:12:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:39.442 00:10:39.442 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1528513: Tue Dec 10 14:12:39 2024 00:10:39.442 read: IOPS=3612, BW=14.1MiB/s (14.8MB/s)(43.6MiB/3087msec) 00:10:39.442 slat (usec): min=6, max=21994, avg=12.39, stdev=280.50 00:10:39.442 clat (usec): min=152, max=986, avg=261.06, stdev=68.00 00:10:39.442 lat (usec): min=162, max=22460, avg=273.46, stdev=292.15 00:10:39.442 clat percentiles (usec): 00:10:39.442 | 1.00th=[ 167], 5.00th=[ 182], 10.00th=[ 192], 20.00th=[ 225], 00:10:39.442 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 260], 00:10:39.442 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 469], 00:10:39.442 | 99.00th=[ 506], 99.50th=[ 510], 99.90th=[ 529], 99.95th=[ 635], 00:10:39.442 | 99.99th=[ 742] 00:10:39.442 bw ( KiB/s): min=11784, max=17218, per=38.09%, avg=14472.33, stdev=1892.58, samples=6 00:10:39.442 iops : min= 2946, max= 4304, avg=3618.00, stdev=473.00, samples=6 00:10:39.442 lat (usec) : 250=42.66%, 500=55.88%, 750=1.44%, 1000=0.01% 00:10:39.442 cpu : usr=1.23%, sys=3.92%, ctx=11156, majf=0, minf=1 00:10:39.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.442 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.442 issued rwts: total=11151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.442 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1528514: Tue Dec 10 14:12:39 2024 00:10:39.442 read: IOPS=32, BW=129KiB/s (132kB/s)(428KiB/3308msec) 00:10:39.442 slat (usec): min=6, max=13812, avg=337.98, stdev=1924.88 00:10:39.442 clat (usec): min=251, max=43081, avg=30366.90, stdev=17973.15 00:10:39.442 lat (usec): min=264, max=54933, avg=30707.81, stdev=18279.39 00:10:39.442 clat percentiles (usec): 00:10:39.442 | 1.00th=[ 258], 5.00th=[ 285], 10.00th=[ 318], 20.00th=[ 338], 00:10:39.442 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:39.442 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:39.442 | 99.00th=[42206], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:39.442 | 99.99th=[43254] 00:10:39.442 bw ( KiB/s): min= 96, max= 312, per=0.35%, avg=133.67, stdev=87.42, samples=6 00:10:39.442 iops : min= 24, max= 78, avg=33.33, stdev=21.90, samples=6 00:10:39.442 lat (usec) : 500=25.93% 00:10:39.442 lat (msec) : 50=73.15% 00:10:39.442 cpu : usr=0.15%, sys=0.00%, ctx=111, majf=0, minf=2 00:10:39.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.442 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.442 issued rwts: total=108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.442 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1528516: Tue Dec 10 14:12:39 2024 00:10:39.442 read: IOPS=3657, BW=14.3MiB/s (15.0MB/s)(41.2MiB/2886msec) 00:10:39.442 slat (nsec): min=6915, max=53703, avg=8202.56, stdev=1332.82 00:10:39.442 
clat (usec): min=182, max=533, avg=261.26, stdev=38.76 00:10:39.442 lat (usec): min=190, max=549, avg=269.46, stdev=38.82 00:10:39.442 clat percentiles (usec): 00:10:39.442 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 221], 20.00th=[ 231], 00:10:39.442 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 260], 60.00th=[ 265], 00:10:39.442 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 322], 00:10:39.442 | 99.00th=[ 424], 99.50th=[ 437], 99.90th=[ 457], 99.95th=[ 465], 00:10:39.442 | 99.99th=[ 498] 00:10:39.442 bw ( KiB/s): min=12952, max=16784, per=39.21%, avg=14896.00, stdev=1496.78, samples=5 00:10:39.442 iops : min= 3238, max= 4196, avg=3724.00, stdev=374.20, samples=5 00:10:39.442 lat (usec) : 250=40.23%, 500=59.75%, 750=0.01% 00:10:39.442 cpu : usr=1.73%, sys=6.24%, ctx=10557, majf=0, minf=2 00:10:39.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.442 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.442 issued rwts: total=10555,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.442 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1528517: Tue Dec 10 14:12:39 2024 00:10:39.442 read: IOPS=3574, BW=14.0MiB/s (14.6MB/s)(37.5MiB/2689msec) 00:10:39.442 slat (nsec): min=5951, max=43957, avg=7767.02, stdev=1325.22 00:10:39.442 clat (usec): min=186, max=666, avg=267.80, stdev=47.11 00:10:39.442 lat (usec): min=194, max=674, avg=275.57, stdev=47.24 00:10:39.442 clat percentiles (usec): 00:10:39.442 | 1.00th=[ 212], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 243], 00:10:39.442 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 265], 00:10:39.442 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 334], 00:10:39.442 | 99.00th=[ 486], 99.50th=[ 494], 99.90th=[ 523], 99.95th=[ 537], 00:10:39.442 | 99.99th=[ 668] 00:10:39.442 bw ( KiB/s): min=13920, max=15512, per=38.04%, avg=14454.40, stdev=613.54, samples=5 00:10:39.442 iops : min= 3480, max= 3878, avg=3613.60, stdev=153.38, samples=5 00:10:39.442 lat (usec) : 250=33.87%, 500=65.82%, 750=0.30% 00:10:39.442 cpu : usr=1.53%, sys=4.06%, ctx=9611, majf=0, minf=2 00:10:39.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.442 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.442 issued rwts: total=9611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.442 00:10:39.442 Run status group 0 (all jobs): 00:10:39.442 READ: bw=37.1MiB/s (38.9MB/s), 129KiB/s-14.3MiB/s (132kB/s-15.0MB/s), io=123MiB (129MB), run=2689-3308msec 00:10:39.442 00:10:39.442 Disk stats (read/write): 00:10:39.442 nvme0n1: ios=11150/0, merge=0/0, ticks=2835/0, in_queue=2835, util=93.10% 00:10:39.442 nvme0n2: ios=102/0, merge=0/0, ticks=3045/0, in_queue=3045, util=94.70% 00:10:39.442 nvme0n3: ios=10392/0, merge=0/0, ticks=2597/0, in_queue=2597, util=96.32% 00:10:39.442 nvme0n4: ios=9283/0, merge=0/0, ticks=2430/0, in_queue=2430, util=96.40% 00:10:39.442 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.442 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:39.701 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.701 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:39.959 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.959 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:40.218 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.218 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:40.476 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:40.476 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1528373 00:10:40.476 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:40.476 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:40.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.476 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:40.477 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:40.477 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:40.477 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.477 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:40.477 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.477 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:40.477 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:40.477 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:40.477 nvmf hotplug test: fio failed as expected 00:10:40.477 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.735 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:40.735 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:40.735 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:40.735 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:40.735 14:12:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:40.735 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:40.735 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:40.735 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:40.735 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:40.735 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:40.735 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:40.735 rmmod nvme_tcp 00:10:40.735 rmmod nvme_fabrics 00:10:40.735 rmmod nvme_keyring 00:10:40.735 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:40.735 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:40.735 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:40.735 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1525462 ']' 00:10:40.735 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1525462 00:10:40.735 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1525462 ']' 00:10:40.735 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1525462 00:10:40.735 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:40.735 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:40.735 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1525462 00:10:40.995 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:40.995 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:40.995 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1525462' 00:10:40.995 killing process with pid 1525462 00:10:40.995 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1525462 00:10:40.995 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1525462 00:10:40.995 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:40.995 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:40.995 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:40.995 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:40.995 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:40.995 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:40.995 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:40.995 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:40.995 14:12:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:40.995 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.995 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.995 14:12:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:43.531 00:10:43.531 real 0m28.508s 00:10:43.531 user 1m51.023s 00:10:43.531 sys 0m9.282s 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.531 ************************************ 00:10:43.531 END TEST nvmf_fio_target 00:10:43.531 ************************************ 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:43.531 ************************************ 00:10:43.531 START TEST nvmf_bdevio 00:10:43.531 ************************************ 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:43.531 * Looking for test storage... 
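Note: the fio passes in the nvmf_fio_target run above are generated by scripts/fio-wrapper, which renders the [global]/[job] stanzas shown in the log and points each job at one of the /dev/nvme0nX block devices that `nvme connect` exposed from the SPDK TCP target. A standalone command with the same shape as the randwrite pass is sketched below; this is an illustration only, and the device path is assumed from the job filenames listed in the log:

    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --thread --invalidate=1 \
        --rw=randwrite --bs=4096 --iodepth=128 --numjobs=1 \
        --time_based --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_state_save=0

The verify options are the point of that pass: the wrapper is checking data integrity across the NVMe/TCP transport rather than just measuring IOPS, so each job writes with crc32c checksums and re-reads against a 512-block verification backlog.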
00:10:43.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.531 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:43.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.532 --rc genhtml_branch_coverage=1 00:10:43.532 --rc genhtml_function_coverage=1 00:10:43.532 --rc genhtml_legend=1 00:10:43.532 --rc geninfo_all_blocks=1 00:10:43.532 --rc geninfo_unexecuted_blocks=1 00:10:43.532 00:10:43.532 ' 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:43.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.532 --rc genhtml_branch_coverage=1 00:10:43.532 --rc genhtml_function_coverage=1 00:10:43.532 --rc genhtml_legend=1 00:10:43.532 --rc geninfo_all_blocks=1 00:10:43.532 --rc geninfo_unexecuted_blocks=1 00:10:43.532 00:10:43.532 ' 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:43.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.532 --rc genhtml_branch_coverage=1 00:10:43.532 --rc genhtml_function_coverage=1 00:10:43.532 --rc genhtml_legend=1 00:10:43.532 --rc geninfo_all_blocks=1 00:10:43.532 --rc geninfo_unexecuted_blocks=1 00:10:43.532 00:10:43.532 ' 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:43.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.532 --rc genhtml_branch_coverage=1 00:10:43.532 --rc genhtml_function_coverage=1 00:10:43.532 --rc genhtml_legend=1 00:10:43.532 --rc geninfo_all_blocks=1 00:10:43.532 --rc geninfo_unexecuted_blocks=1 00:10:43.532 00:10:43.532 ' 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.532 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:43.532 14:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.104 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.104 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:50.104 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:50.104 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:50.104 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:50.104 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:50.104 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:50.105 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:50.105 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:50.105 14:12:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:50.105 Found net devices under 0000:af:00.0: cvl_0_0 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:50.105 Found net devices under 0000:af:00.1: cvl_0_1 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.105 
14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:50.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:10:50.105 00:10:50.105 --- 10.0.0.2 ping statistics --- 00:10:50.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.105 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:50.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:10:50.105 00:10:50.105 --- 10.0.0.1 ping statistics --- 00:10:50.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.105 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.105 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1533234 00:10:50.106 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:50.106 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1533234 00:10:50.106 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1533234 ']' 00:10:50.106 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.106 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.106 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.106 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.106 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.365 [2024-12-10 14:12:50.891436] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
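Note: the nvmftestinit plumbing traced above reduces to the following sequence (a condensed sketch reusing the interface names and addresses from the log; cvl_0_0 and cvl_0_1 are two ports of the same physical e810 NIC, and the namespace split keeps the kernel from short-circuiting traffic between the two local ports, so the NVMe/TCP connection actually crosses the link):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # reachability checks, as above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x78), which is why NVMF_TARGET_NS_CMD wraps every target-side command with the same prefix.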
00:10:50.365 [2024-12-10 14:12:50.891482] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.365 [2024-12-10 14:12:50.978386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.365 [2024-12-10 14:12:51.018543] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.365 [2024-12-10 14:12:51.018583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.365 [2024-12-10 14:12:51.018590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.365 [2024-12-10 14:12:51.018595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.365 [2024-12-10 14:12:51.018600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.365 [2024-12-10 14:12:51.020282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:50.365 [2024-12-10 14:12:51.020388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:50.365 [2024-12-10 14:12:51.020499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.365 [2024-12-10 14:12:51.020500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:51.301 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.301 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:51.301 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:51.301 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:51.301 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.301 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.302 [2024-12-10 14:12:51.767224] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.302 Malloc0 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.302 14:12:51 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.302 [2024-12-10 14:12:51.836291] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:51.302 { 00:10:51.302 "params": { 00:10:51.302 "name": "Nvme$subsystem", 00:10:51.302 "trtype": "$TEST_TRANSPORT", 00:10:51.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:51.302 "adrfam": "ipv4", 00:10:51.302 "trsvcid": "$NVMF_PORT", 00:10:51.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:51.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:51.302 "hdgst": ${hdgst:-false}, 00:10:51.302 "ddgst": ${ddgst:-false} 00:10:51.302 }, 00:10:51.302 "method": "bdev_nvme_attach_controller" 00:10:51.302 } 00:10:51.302 EOF 00:10:51.302 )") 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:51.302 14:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:51.302 "params": { 00:10:51.302 "name": "Nvme1", 00:10:51.302 "trtype": "tcp", 00:10:51.302 "traddr": "10.0.0.2", 00:10:51.302 "adrfam": "ipv4", 00:10:51.302 "trsvcid": "4420", 00:10:51.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:51.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:51.302 "hdgst": false, 00:10:51.302 "ddgst": false 00:10:51.302 }, 00:10:51.302 "method": "bdev_nvme_attach_controller" 00:10:51.302 }' 00:10:51.302 [2024-12-10 14:12:51.888117] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
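Annotation: the rpc_cmd traces above map onto plain rpc.py invocations. For replaying this target setup by hand outside the harness, a minimal sketch (assuming a running nvmf_tgt on the default /var/tmp/spdk.sock socket, run from the spdk checkout; transport options, NQN, serial, address, and port copied from the log):

    # Sketch of the traced setup sequence; not the bdevio.sh harness itself.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio binary then consumes the gen_nvmf_target_json output printed above via --json /dev/fd/62, attaching to that same listener as host nqn.2016-06.io.spdk:host1.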
00:10:51.302 [2024-12-10 14:12:51.888161] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1533482 ] 00:10:51.302 [2024-12-10 14:12:51.969920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:51.302 [2024-12-10 14:12:52.011834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.302 [2024-12-10 14:12:52.011941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.302 [2024-12-10 14:12:52.011942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.870 I/O targets: 00:10:51.870 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:51.870 00:10:51.870 00:10:51.870 CUnit - A unit testing framework for C - Version 2.1-3 00:10:51.870 http://cunit.sourceforge.net/ 00:10:51.870 00:10:51.870 00:10:51.870 Suite: bdevio tests on: Nvme1n1 00:10:51.870 Test: blockdev write read block ...passed 00:10:51.870 Test: blockdev write zeroes read block ...passed 00:10:51.870 Test: blockdev write zeroes read no split ...passed 00:10:51.870 Test: blockdev write zeroes read split ...passed 00:10:51.870 Test: blockdev write zeroes read split partial ...passed 00:10:51.870 Test: blockdev reset ...[2024-12-10 14:12:52.438533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:51.870 [2024-12-10 14:12:52.438596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18398b0 (9): Bad file descriptor 00:10:51.870 [2024-12-10 14:12:52.454204] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:51.870 passed 00:10:51.870 Test: blockdev write read 8 blocks ...passed 00:10:51.870 Test: blockdev write read size > 128k ...passed 00:10:51.870 Test: blockdev write read invalid size ...passed 00:10:51.870 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:51.870 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:51.870 Test: blockdev write read max offset ...passed 00:10:51.870 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:52.129 Test: blockdev writev readv 8 blocks ...passed 00:10:52.129 Test: blockdev writev readv 30 x 1block ...passed 00:10:52.129 Test: blockdev writev readv block ...passed 00:10:52.129 Test: blockdev writev readv size > 128k ...passed 00:10:52.129 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:52.129 Test: blockdev comparev and writev ...[2024-12-10 14:12:52.703829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.129 [2024-12-10 14:12:52.703857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:52.129 [2024-12-10 14:12:52.703872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.129 [2024-12-10 14:12:52.703880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:52.129 [2024-12-10 14:12:52.704113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.129 [2024-12-10 14:12:52.704127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:52.129 [2024-12-10 14:12:52.704138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.129 [2024-12-10 14:12:52.704145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:52.129 [2024-12-10 14:12:52.704378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.129 [2024-12-10 14:12:52.704388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:52.129 [2024-12-10 14:12:52.704399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.129 [2024-12-10 14:12:52.704406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:52.129 [2024-12-10 14:12:52.704646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.129 [2024-12-10 14:12:52.704655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:52.129 [2024-12-10 14:12:52.704667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.129 [2024-12-10 14:12:52.704674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:52.129 passed 00:10:52.129 Test: blockdev nvme passthru rw ...passed 00:10:52.129 Test: blockdev nvme passthru vendor specific ...[2024-12-10 14:12:52.788559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.129 [2024-12-10 14:12:52.788573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:52.129 [2024-12-10 14:12:52.788673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.129 [2024-12-10 14:12:52.788683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:52.129 [2024-12-10 14:12:52.788779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.129 [2024-12-10 14:12:52.788788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:52.129 [2024-12-10 14:12:52.788884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.129 [2024-12-10 14:12:52.788892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:52.129 passed 00:10:52.129 Test: blockdev nvme admin passthru ...passed 00:10:52.129 Test: blockdev copy ...passed 00:10:52.129 00:10:52.129 Run Summary: Type Total Ran Passed Failed Inactive 00:10:52.129 suites 1 1 n/a 0 0 00:10:52.129 tests 23 23 23 0 0 00:10:52.129 asserts 152 152 152 0 n/a 00:10:52.129 00:10:52.129 Elapsed time = 1.034 seconds 00:10:52.389 14:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.389 14:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.389 14:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:52.389 14:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.389 14:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:52.389 14:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:52.389 14:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:52.389 14:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:52.389 14:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:52.389 14:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:52.389 14:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:52.389 14:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:52.389 rmmod nvme_tcp 00:10:52.389 rmmod nvme_fabrics 00:10:52.389 rmmod nvme_keyring 00:10:52.389 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:52.389 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:52.389 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
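Annotation: the set +e / modprobe -v -r loop traced above is a best-effort unload of the kernel NVMe-oF initiator stack, tolerating modules that are already gone (hence the rmmod lines). Reduced to its effect — a sketch, not the test/nvmf/common.sh source, which retries in a loop:

    # Best-effort unload; modprobe -r also drops dependent modules,
    # which is why rmmod nvme_tcp/nvme_fabrics/nvme_keyring appear above.
    set +e
    for mod in nvme-tcp nvme-fabrics nvme-keyring; do
        sudo modprobe -v -r "$mod"
    done
    set -e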
00:10:52.389 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1533234 ']' 00:10:52.389 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1533234 00:10:52.389 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1533234 ']' 00:10:52.389 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1533234 00:10:52.390 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:52.390 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.390 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1533234 00:10:52.390 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:52.390 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:52.390 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1533234' 00:10:52.390 killing process with pid 1533234 00:10:52.390 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1533234 00:10:52.390 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1533234 00:10:52.649 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:52.649 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:52.649 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:52.649 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:52.649 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:52.649 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:52.649 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:52.649 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:52.649 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:52.649 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.649 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.649 14:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:55.184 00:10:55.184 real 0m11.561s 00:10:55.184 user 0m13.456s 00:10:55.184 sys 0m5.670s 00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.184 ************************************ 00:10:55.184 END TEST nvmf_bdevio 00:10:55.184 ************************************ 00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:55.184 00:10:55.184 real 4m49.894s 00:10:55.184 user 10m36.227s 00:10:55.184 sys 1m45.361s 
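Annotation: killing the target and unwinding the network namespace complete the nvmftestfini teardown. The traced helpers boil down to roughly the following (a sketch; the pid and interface names are the ones recorded in this run, and ip netns delete stands in for what _remove_spdk_ns amounts to):

    sudo kill 1533234                                 # killprocess $nvmfpid
    sudo iptables-save | grep -v SPDK_NVMF | sudo iptables-restore   # iptr
    sudo ip netns delete cvl_0_0_ns_spdk              # _remove_spdk_ns, roughly
    sudo ip -4 addr flush cvl_0_1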
00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:55.184 ************************************ 00:10:55.184 END TEST nvmf_target_core 00:10:55.184 ************************************ 00:10:55.184 14:12:55 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:55.184 14:12:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:55.184 14:12:55 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.184 14:12:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:55.184 ************************************ 00:10:55.184 START TEST nvmf_target_extra 00:10:55.184 ************************************ 00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:55.184 * Looking for test storage... 00:10:55.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.184 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:55.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.185 --rc genhtml_branch_coverage=1 00:10:55.185 --rc genhtml_function_coverage=1 00:10:55.185 --rc genhtml_legend=1 00:10:55.185 --rc geninfo_all_blocks=1 00:10:55.185 --rc geninfo_unexecuted_blocks=1 00:10:55.185 00:10:55.185 ' 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:55.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.185 --rc genhtml_branch_coverage=1 00:10:55.185 --rc genhtml_function_coverage=1 00:10:55.185 --rc genhtml_legend=1 00:10:55.185 --rc geninfo_all_blocks=1 00:10:55.185 --rc geninfo_unexecuted_blocks=1 00:10:55.185 00:10:55.185 ' 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:55.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.185 --rc genhtml_branch_coverage=1 00:10:55.185 --rc genhtml_function_coverage=1 00:10:55.185 --rc genhtml_legend=1 00:10:55.185 --rc geninfo_all_blocks=1 00:10:55.185 --rc geninfo_unexecuted_blocks=1 00:10:55.185 00:10:55.185 ' 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:55.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.185 --rc genhtml_branch_coverage=1 00:10:55.185 --rc genhtml_function_coverage=1 00:10:55.185 --rc genhtml_legend=1 00:10:55.185 --rc geninfo_all_blocks=1 00:10:55.185 --rc geninfo_unexecuted_blocks=1 00:10:55.185 00:10:55.185 ' 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
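Annotation: the cmp_versions trace above decides whether the installed lcov predates 2.x and therefore needs the branch/function coverage rc flags. Functionally it is equivalent to this sketch (using sort -V rather than the scripts/common.sh field-by-field loop):

    # Succeeds when $1 is strictly older than $2.
    version_lt() {
        [ "$1" != "$2" ] && \
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi

Here lcov 1.15 sorts before 2, so the pre-2.x rc options are exported, matching the LCOV_OPTS value echoed in the trace.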
00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:55.185 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:55.185 ************************************ 00:10:55.185 START TEST nvmf_example 00:10:55.185 ************************************ 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:55.185 * Looking for test storage... 
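Annotation: the "line 33: [: : integer expression expected" complaint above is benign — common.sh runs a numeric test against a variable that is empty in this environment, so [ sees '' where it expects a number and the branch simply falls through. Reduced to a reproducible one-liner (the variable name below is a stand-in for illustration, not the one common.sh uses):

    v=''
    [ "$v" -eq 1 ]          # bash: [: : integer expression expected (status 2)
    [ "${v:-0}" -eq 1 ]     # defensive default: evaluates false instead of erroring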
00:10:55.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:55.185 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:55.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.186 --rc genhtml_branch_coverage=1 00:10:55.186 --rc genhtml_function_coverage=1 00:10:55.186 --rc genhtml_legend=1 00:10:55.186 --rc geninfo_all_blocks=1 00:10:55.186 --rc geninfo_unexecuted_blocks=1 00:10:55.186 00:10:55.186 ' 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:55.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.186 --rc genhtml_branch_coverage=1 00:10:55.186 --rc genhtml_function_coverage=1 00:10:55.186 --rc genhtml_legend=1 00:10:55.186 --rc geninfo_all_blocks=1 00:10:55.186 --rc geninfo_unexecuted_blocks=1 00:10:55.186 00:10:55.186 ' 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:55.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.186 --rc genhtml_branch_coverage=1 00:10:55.186 --rc genhtml_function_coverage=1 00:10:55.186 --rc genhtml_legend=1 00:10:55.186 --rc geninfo_all_blocks=1 00:10:55.186 --rc geninfo_unexecuted_blocks=1 00:10:55.186 00:10:55.186 ' 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:55.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.186 --rc genhtml_branch_coverage=1 00:10:55.186 --rc genhtml_function_coverage=1 00:10:55.186 --rc genhtml_legend=1 00:10:55.186 --rc geninfo_all_blocks=1 00:10:55.186 --rc geninfo_unexecuted_blocks=1 00:10:55.186 00:10:55.186 ' 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:55.186 14:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:55.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:55.186 14:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:55.186 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:55.445 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:55.445 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:55.445 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:55.445 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:55.445 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:55.445 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:55.445 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:55.445 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:55.445 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.445 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:55.445 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:55.445 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:55.445 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.445 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.445 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.445 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:55.446 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:55.446 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:55.446 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:02.016 14:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:02.016 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:02.016 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:02.016 Found net devices under 0000:af:00.0: cvl_0_0 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:02.016 Found net devices under 0000:af:00.1: cvl_0_1 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.016 14:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:02.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:02.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:11:02.016 00:11:02.016 --- 10.0.0.2 ping statistics --- 00:11:02.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.016 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:11:02.016 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:02.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:02.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:11:02.016 00:11:02.016 --- 10.0.0.1 ping statistics --- 00:11:02.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.017 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:11:02.017 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.017 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:02.017 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:02.017 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.017 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:02.017 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:02.017 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.017 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:02.017 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:02.275 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:02.275 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:02.275 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:02.275 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:02.275 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:02.275 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:02.275 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1537768 00:11:02.275 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:02.275 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:02.275 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1537768 00:11:02.275 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1537768 ']' 00:11:02.275 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.275 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.275 14:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.275 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.275 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.212 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:03.213 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:13.285 Initializing NVMe Controllers 00:11:13.285 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:13.285 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:13.285 Initialization complete. Launching workers. 00:11:13.285 ======================================================== 00:11:13.285 Latency(us) 00:11:13.285 Device Information : IOPS MiB/s Average min max 00:11:13.285 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18248.57 71.28 3506.56 523.27 15569.36 00:11:13.285 ======================================================== 00:11:13.285 Total : 18248.57 71.28 3506.56 523.27 15569.36 00:11:13.285 00:11:13.285 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:13.285 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:13.285 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:13.285 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:13.285 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:13.285 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:13.285 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:13.285 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:13.285 rmmod nvme_tcp 00:11:13.285 rmmod nvme_fabrics 00:11:13.285 rmmod nvme_keyring 00:11:13.285 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:13.285 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:13.285 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:13.285 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1537768 ']' 00:11:13.285 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1537768 00:11:13.285 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1537768 ']' 00:11:13.285 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1537768 00:11:13.285 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:13.285 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:13.285 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1537768 00:11:13.545 14:13:14 
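Once the example target (build/examples/nvmf, core mask 0xF) is running inside the namespace and listening on /var/tmp/spdk.sock, everything it serves is provisioned at runtime over that RPC socket: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDK00000000000001, -a allowing any host) with the bdev attached as namespace 1 and a TCP listener on 10.0.0.2:4420. spdk_nvme_perf then drives it for 10 s at queue depth 64 with 4 KiB mixed random read/write I/O, landing at roughly 18.2k IOPS / 71 MiB/s with ~3.5 ms average latency in the table above. The same sequence, issued directly with SPDK's scripts/rpc.py (the log's rpc_cmd is a wrapper around it; run from an spdk checkout with the target already up):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, 8 KiB IO unit
    ./scripts/rpc.py bdev_malloc_create 64 512                 # 64 MiB bdev, 512 B blocks -> Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                               # -a: allow any host to connect
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    # mixed random read/write, 4 KiB, queue depth 64, 10 s -- the run measured above
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'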
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:13.545 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:13.545 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1537768' 00:11:13.545 killing process with pid 1537768 00:11:13.545 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1537768 00:11:13.545 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1537768 00:11:13.545 nvmf threads initialize successfully 00:11:13.545 bdev subsystem init successfully 00:11:13.545 created a nvmf target service 00:11:13.545 create targets's poll groups done 00:11:13.545 all subsystems of target started 00:11:13.545 nvmf target is running 00:11:13.545 all subsystems of target stopped 00:11:13.545 destroy targets's poll groups done 00:11:13.545 destroyed the nvmf target service 00:11:13.545 bdev subsystem finish successfully 00:11:13.545 nvmf threads destroy successfully 00:11:13.545 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:13.545 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:13.545 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:13.545 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:13.545 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:13.545 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:13.545 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:13.545 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:13.545 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:13.545 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.545 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.545 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:16.085 00:11:16.085 real 0m20.631s 00:11:16.085 user 0m46.025s 00:11:16.085 sys 0m6.685s 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:16.085 ************************************ 00:11:16.085 END TEST nvmf_example 00:11:16.085 ************************************ 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:16.085 ************************************ 00:11:16.085 START TEST nvmf_filesystem 00:11:16.085 ************************************ 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:16.085 * Looking for test storage... 00:11:16.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:16.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.085 --rc genhtml_branch_coverage=1 00:11:16.085 --rc genhtml_function_coverage=1 00:11:16.085 --rc genhtml_legend=1 00:11:16.085 --rc geninfo_all_blocks=1 00:11:16.085 --rc geninfo_unexecuted_blocks=1 00:11:16.085 00:11:16.085 ' 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:16.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.085 --rc genhtml_branch_coverage=1 00:11:16.085 --rc genhtml_function_coverage=1 00:11:16.085 --rc genhtml_legend=1 00:11:16.085 --rc geninfo_all_blocks=1 00:11:16.085 --rc geninfo_unexecuted_blocks=1 00:11:16.085 00:11:16.085 ' 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:16.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.085 --rc genhtml_branch_coverage=1 00:11:16.085 --rc genhtml_function_coverage=1 00:11:16.085 --rc genhtml_legend=1 00:11:16.085 --rc geninfo_all_blocks=1 00:11:16.085 --rc geninfo_unexecuted_blocks=1 00:11:16.085 00:11:16.085 ' 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:16.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.085 --rc genhtml_branch_coverage=1 00:11:16.085 --rc genhtml_function_coverage=1 00:11:16.085 --rc genhtml_legend=1 00:11:16.085 --rc geninfo_all_blocks=1 00:11:16.085 --rc geninfo_unexecuted_blocks=1 00:11:16.085 00:11:16.085 ' 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:16.085 14:13:16 
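The scripts/common.sh trace that opens the filesystem test is a version gate: lt 1.15 2 splits the installed lcov version and the threshold on dots, dashes and colons (IFS=.-:) and compares them field by field, and since 1.15 < 2 the old --rc lcov_branch_coverage=1 / lcov_function_coverage=1 option names are kept in LCOV_OPTS. A standalone sketch of that comparison under the same splitting rule; the function name lt_version is illustrative, not the script's own:

    lt_version() {                       # returns success when $1 < $2
        local -a v1 v2
        local i
        IFS=.-: read -ra v1 <<< "$1"     # split on dots, dashes, colons
        IFS=.-: read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1                         # equal is not less-than
    }
    lt_version 1.15 2 && echo "lcov < 2: keep the old --rc option names"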
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:16.085 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:16.086 
14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:16.086 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:16.086 #define SPDK_CONFIG_H 00:11:16.086 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:16.086 #define SPDK_CONFIG_APPS 1 00:11:16.086 #define SPDK_CONFIG_ARCH native 00:11:16.086 #undef SPDK_CONFIG_ASAN 00:11:16.086 #undef SPDK_CONFIG_AVAHI 00:11:16.086 #undef SPDK_CONFIG_CET 00:11:16.086 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:16.086 #define SPDK_CONFIG_COVERAGE 1 00:11:16.086 #define SPDK_CONFIG_CROSS_PREFIX 00:11:16.087 #undef SPDK_CONFIG_CRYPTO 00:11:16.087 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:16.087 #undef SPDK_CONFIG_CUSTOMOCF 00:11:16.087 #undef SPDK_CONFIG_DAOS 00:11:16.087 #define SPDK_CONFIG_DAOS_DIR 00:11:16.087 #define SPDK_CONFIG_DEBUG 1 00:11:16.087 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:16.087 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:16.087 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:16.087 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:16.087 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:16.087 #undef SPDK_CONFIG_DPDK_UADK 00:11:16.087 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:16.087 #define SPDK_CONFIG_EXAMPLES 1 00:11:16.087 #undef SPDK_CONFIG_FC 00:11:16.087 #define SPDK_CONFIG_FC_PATH 00:11:16.087 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:16.087 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:16.087 #define SPDK_CONFIG_FSDEV 1 00:11:16.087 #undef SPDK_CONFIG_FUSE 00:11:16.087 #undef SPDK_CONFIG_FUZZER 00:11:16.087 #define SPDK_CONFIG_FUZZER_LIB 00:11:16.087 #undef SPDK_CONFIG_GOLANG 00:11:16.087 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:16.087 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:16.087 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:16.087 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:16.087 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:16.087 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:16.087 #undef SPDK_CONFIG_HAVE_LZ4 00:11:16.087 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:16.087 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:16.087 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:16.087 #define SPDK_CONFIG_IDXD 1 00:11:16.087 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:16.087 #undef SPDK_CONFIG_IPSEC_MB 00:11:16.087 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:16.087 #define SPDK_CONFIG_ISAL 1 00:11:16.087 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:16.087 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:16.087 #define SPDK_CONFIG_LIBDIR 00:11:16.087 #undef SPDK_CONFIG_LTO 00:11:16.087 #define SPDK_CONFIG_MAX_LCORES 128 00:11:16.087 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:16.087 #define SPDK_CONFIG_NVME_CUSE 1 00:11:16.087 #undef SPDK_CONFIG_OCF 00:11:16.087 #define SPDK_CONFIG_OCF_PATH 00:11:16.087 #define SPDK_CONFIG_OPENSSL_PATH 00:11:16.087 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:16.087 #define SPDK_CONFIG_PGO_DIR 00:11:16.087 #undef SPDK_CONFIG_PGO_USE 00:11:16.087 #define SPDK_CONFIG_PREFIX /usr/local 00:11:16.087 #undef SPDK_CONFIG_RAID5F 00:11:16.087 #undef SPDK_CONFIG_RBD 00:11:16.087 #define SPDK_CONFIG_RDMA 1 00:11:16.087 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:16.087 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:16.087 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:16.087 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:16.087 #define SPDK_CONFIG_SHARED 1 00:11:16.087 #undef SPDK_CONFIG_SMA 00:11:16.087 #define SPDK_CONFIG_TESTS 1 00:11:16.087 #undef SPDK_CONFIG_TSAN 
00:11:16.087 #define SPDK_CONFIG_UBLK 1 00:11:16.087 #define SPDK_CONFIG_UBSAN 1 00:11:16.087 #undef SPDK_CONFIG_UNIT_TESTS 00:11:16.087 #undef SPDK_CONFIG_URING 00:11:16.087 #define SPDK_CONFIG_URING_PATH 00:11:16.087 #undef SPDK_CONFIG_URING_ZNS 00:11:16.087 #undef SPDK_CONFIG_USDT 00:11:16.087 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:16.087 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:16.087 #define SPDK_CONFIG_VFIO_USER 1 00:11:16.087 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:16.087 #define SPDK_CONFIG_VHOST 1 00:11:16.087 #define SPDK_CONFIG_VIRTIO 1 00:11:16.087 #undef SPDK_CONFIG_VTUNE 00:11:16.087 #define SPDK_CONFIG_VTUNE_DIR 00:11:16.087 #define SPDK_CONFIG_WERROR 1 00:11:16.087 #define SPDK_CONFIG_WPDK_DIR 00:11:16.087 #undef SPDK_CONFIG_XNVME 00:11:16.087 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:16.087 14:13:16 
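Before any test logic runs, autotest_common.sh sources test/common/build_config.sh, which mirrors every ./configure decision into a CONFIG_* shell variable (CONFIG_UBSAN=y, CONFIG_VFIO_USER=y, and so on, as dumped above), and applications.sh cross-checks include/spdk/config.h with a glob match (the heavily escaped *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* pattern) to confirm a debug build. A hedged sketch of gating on those flags; require_config is an illustrative helper, not part of the scripts:

    source ./test/common/build_config.sh        # defines the CONFIG_* variables
    require_config() {                          # succeed only if the feature was configured in
        local flag=CONFIG_$1
        [[ ${!flag} == y ]]
    }
    require_config UBSAN && echo "UBSAN-instrumented build"
    require_config VFIO_USER || echo "skipping vfio-user cases"
    # the config.h sanity check, same glob-match idea as applications.sh:
    [[ $(< ./include/spdk/config.h) == *"#define SPDK_CONFIG_DEBUG"* ]] \
        && echo "debug build confirmed"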
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:16.087 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:16.088 14:13:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:16.088 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
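The repeated segments in the LD_LIBRARY_PATH and PYTHONPATH exports above appear to come from each nested test domain re-sourcing the environment setup, which prepends SPDK_LIB_DIR, DPDK_LIB_DIR and VFIO_LIB_DIR without checking whether they are already present. A minimal sketch of an idempotent alternative, assuming a hypothetical dedupe_path helper that is not part of the SPDK scripts:

dedupe_path() {
    # split on ':' and keep only the first occurrence of each entry
    local IFS=':' entry out='' seen=''
    for entry in $1; do
        [[ -z $entry ]] && continue
        case ":$seen:" in
            *":$entry:"*) ;;                                  # duplicate, drop it
            *) seen+=":$entry"; out="${out:+$out:}$entry" ;;
        esac
    done
    printf '%s\n' "$out"
}
# hypothetical usage after the exports above:
# export LD_LIBRARY_PATH=$(dedupe_path "$LD_LIBRARY_PATH")
# export PYTHONPATH=$(dedupe_path "$PYTHONPATH")

The duplication is harmless for correctness (the dynamic loader and Python both stop at the first match), which is presumably why the harness does not bother deduplicating.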
00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
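The block above wires up the sanitizer runtime: a stale suppression file is removed, a leak entry for libfuse3 is written, and LeakSanitizer is pointed at it alongside the ASAN/UBSAN options. Condensed into plain shell with the same paths and values as the trace (the output redirection on the echo is inferred, since bash xtrace does not print redirections):

rm -rf /var/tmp/asan_suppression_file
echo 'leak:libfuse3.so' >> /var/tmp/asan_suppression_file     # redirection inferred from @242/@244
export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

With the suppression in place, LSAN ignores leaks whose stack traces pass through libfuse3.so instead of failing the run, while ASAN still aborts on any real error and UBSAN halts with exit code 134.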
00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:16.089 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1540153 ]] 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1540153 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
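set_test_storage, traced below, picks a directory with at least the requested free space for the filesystem tests: it parses df output into per-mount associative arrays, then walks storage_candidates until one fits. A condensed paraphrase, not the verbatim function; the -B1 block size and the placeholder defaults are assumptions inferred from the byte-scale numbers in the trace:

requested_size=2214592512                        # 2 GiB request plus slack, as logged below
testdir=${testdir:-/tmp/testdir}                 # placeholders; real values come from the harness
storage_fallback=${storage_fallback:-$(mktemp -udt spdk.XXXXXX)}
storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")

declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts[$mount]=$source; fss[$mount]=$fs
    sizes[$mount]=$size; uses[$mount]=$use; avails[$mount]=$avail
done < <(df -T -B1 | grep -v Filesystem)

for target_dir in "${storage_candidates[@]}"; do
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]}
    (( target_space == 0 || target_space < requested_size )) && continue
    # @395 computes this ratio; what the script does when it exceeds 95% is not visible in this run
    new_size=$(( requested_size + uses[$mount] ))
    (( new_size * 100 / sizes[$mount] > 95 )) || true
    export SPDK_TEST_STORAGE=$target_dir
    break
done

In this run the root overlay mount wins on the first candidate: target_space=93089697792 comfortably exceeds the request, and the fill ratio stays far under 95% (9962094592 * 100 / 100837199872 is roughly 9%), so SPDK_TEST_STORAGE lands under test/nvmf/target.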
00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.ERdOPM 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ERdOPM/tests/target /tmp/spdk.ERdOPM 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=93089697792 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=100837199872 00:11:16.090 14:13:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7747502080 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50408566784 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418597888 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=20144234496 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20167442432 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23207936 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50418241536 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418601984 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=360448 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=10083704832 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=10083717120 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:16.090 * Looking for test 
storage... 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=93089697792 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9962094592 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:16.090 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:16.090 14:13:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:16.091 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:16.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.351 --rc genhtml_branch_coverage=1 00:11:16.351 --rc genhtml_function_coverage=1 00:11:16.351 --rc genhtml_legend=1 00:11:16.351 --rc geninfo_all_blocks=1 00:11:16.351 --rc geninfo_unexecuted_blocks=1 00:11:16.351 00:11:16.351 ' 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:16.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.351 --rc genhtml_branch_coverage=1 00:11:16.351 --rc genhtml_function_coverage=1 00:11:16.351 --rc genhtml_legend=1 00:11:16.351 --rc geninfo_all_blocks=1 00:11:16.351 --rc geninfo_unexecuted_blocks=1 00:11:16.351 00:11:16.351 ' 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:16.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.351 --rc genhtml_branch_coverage=1 00:11:16.351 --rc genhtml_function_coverage=1 00:11:16.351 --rc genhtml_legend=1 00:11:16.351 --rc geninfo_all_blocks=1 00:11:16.351 --rc geninfo_unexecuted_blocks=1 00:11:16.351 00:11:16.351 ' 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:16.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.351 --rc genhtml_branch_coverage=1 00:11:16.351 --rc genhtml_function_coverage=1 00:11:16.351 --rc genhtml_legend=1 00:11:16.351 --rc geninfo_all_blocks=1 00:11:16.351 --rc geninfo_unexecuted_blocks=1 00:11:16.351 00:11:16.351 ' 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.351 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:16.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:16.352 14:13:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:16.352 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:22.922 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:22.922 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:22.922 14:13:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:22.922 Found net devices under 0000:af:00.0: cvl_0_0 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.922 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:22.923 Found net devices under 0000:af:00.1: cvl_0_1 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:22.923 14:13:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:22.923 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:23.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:23.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:11:23.182 00:11:23.182 --- 10.0.0.2 ping statistics --- 00:11:23.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.182 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:23.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:23.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:11:23.182 00:11:23.182 --- 10.0.0.1 ping statistics --- 00:11:23.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.182 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:23.182 ************************************ 00:11:23.182 START TEST nvmf_filesystem_no_in_capsule 00:11:23.182 ************************************ 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1543673 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1543673 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1543673 ']' 00:11:23.182 
14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.182 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.182 [2024-12-10 14:13:23.859159] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:11:23.182 [2024-12-10 14:13:23.859200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.441 [2024-12-10 14:13:23.942695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.441 [2024-12-10 14:13:23.983307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.441 [2024-12-10 14:13:23.983348] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:23.441 [2024-12-10 14:13:23.983355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.441 [2024-12-10 14:13:23.983361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.442 [2024-12-10 14:13:23.983366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:23.442 [2024-12-10 14:13:23.984804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.442 [2024-12-10 14:13:23.984837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.442 [2024-12-10 14:13:23.984946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.442 [2024-12-10 14:13:23.984947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.009 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.009 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:24.009 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:24.009 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:24.009 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.009 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.009 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:24.009 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:24.009 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.009 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.269 [2024-12-10 14:13:24.752781] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.269 Malloc1 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.269 14:13:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.269 [2024-12-10 14:13:24.908890] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:24.269 { 00:11:24.269 "name": "Malloc1", 00:11:24.269 "aliases": [ 00:11:24.269 "671dad70-089a-4b1c-aacf-e0a16fd2e14f" 00:11:24.269 ], 00:11:24.269 "product_name": "Malloc disk", 00:11:24.269 "block_size": 512, 00:11:24.269 "num_blocks": 1048576, 00:11:24.269 "uuid": "671dad70-089a-4b1c-aacf-e0a16fd2e14f", 00:11:24.269 "assigned_rate_limits": { 00:11:24.269 "rw_ios_per_sec": 0, 00:11:24.269 "rw_mbytes_per_sec": 0, 00:11:24.269 "r_mbytes_per_sec": 0, 00:11:24.269 "w_mbytes_per_sec": 0 00:11:24.269 }, 00:11:24.269 "claimed": true, 00:11:24.269 "claim_type": "exclusive_write", 00:11:24.269 "zoned": false, 00:11:24.269 "supported_io_types": { 00:11:24.269 "read": 
true, 00:11:24.269 "write": true, 00:11:24.269 "unmap": true, 00:11:24.269 "flush": true, 00:11:24.269 "reset": true, 00:11:24.269 "nvme_admin": false, 00:11:24.269 "nvme_io": false, 00:11:24.269 "nvme_io_md": false, 00:11:24.269 "write_zeroes": true, 00:11:24.269 "zcopy": true, 00:11:24.269 "get_zone_info": false, 00:11:24.269 "zone_management": false, 00:11:24.269 "zone_append": false, 00:11:24.269 "compare": false, 00:11:24.269 "compare_and_write": false, 00:11:24.269 "abort": true, 00:11:24.269 "seek_hole": false, 00:11:24.269 "seek_data": false, 00:11:24.269 "copy": true, 00:11:24.269 "nvme_iov_md": false 00:11:24.269 }, 00:11:24.269 "memory_domains": [ 00:11:24.269 { 00:11:24.269 "dma_device_id": "system", 00:11:24.269 "dma_device_type": 1 00:11:24.269 }, 00:11:24.269 { 00:11:24.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.269 "dma_device_type": 2 00:11:24.269 } 00:11:24.269 ], 00:11:24.269 "driver_specific": {} 00:11:24.269 } 00:11:24.269 ]' 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:24.269 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:24.528 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:24.528 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:24.528 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:24.528 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:24.528 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:25.464 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:25.464 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:25.464 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:25.464 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:25.464 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:27.996 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:27.996 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:27.996 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:27.996 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:27.996 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:27.996 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:27.996 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:27.996 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:27.996 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:27.996 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:27.996 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:27.996 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:27.996 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:27.996 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:27.996 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:27.996 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:27.996 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:27.996 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:27.996 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:28.933 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:28.933 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:28.933 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:28.933 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.933 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.933 ************************************ 00:11:28.933 START TEST filesystem_ext4 00:11:28.933 ************************************ 00:11:28.933 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
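Untangled from the xtrace output above, the host side of the setup is: connect the kernel initiator, wait for the namespace to surface as a block device, then partition it. A condensed sketch using the values from this run:

  # Connect to the target subsystem over TCP (host NQN/ID as recorded in the trace).
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
      --hostid=801347e8-3fd0-e911-906e-0017a4403562

  # Wait until lsblk shows a device carrying the subsystem serial number.
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
      sleep 2
  done

  # Lay down a single GPT partition across the new namespace (nvme0n1 here).
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe

The sec_size_to_bytes check in between simply confirms that the 536870912-byte namespace seen by the host matches the 512 MiB malloc bdev exported by the target.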
00:11:28.933 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:28.933 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:28.933 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:28.933 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:28.933 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:28.933 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:28.933 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:28.933 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:28.933 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:28.933 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:28.933 mke2fs 1.47.0 (5-Feb-2023) 00:11:29.192 Discarding device blocks: 0/522240 done 00:11:29.192 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:29.192 Filesystem UUID: 0879cf22-d29f-49d9-8721-c4fc5f3f33b7 00:11:29.192 Superblock backups stored on blocks: 00:11:29.192 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:29.192 00:11:29.192 Allocating group tables: 0/64 done 00:11:29.192 Writing inode tables: 0/64 done 00:11:29.192 Creating journal (8192 blocks): done 00:11:29.192 Writing superblocks and filesystem accounting information: 0/64 done 00:11:29.192 00:11:29.192 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:29.192 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:35.756 
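Every filesystem_* subtest in this suite repeats the pattern the ext4 trace above just walked through: build the filesystem, mount it, create and delete a file with a sync after each step, unmount, and confirm the target survived. Condensed, with the pid and device names from this run:

  mkfs.ext4 -F /dev/nvme0n1p1              # make_filesystem picks -F for ext4, -f otherwise
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa                    # write a file across NVMe/TCP
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 1543673                          # the target process must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still visible to the host
  lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still visible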
14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1543673 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:35.756 00:11:35.756 real 0m5.709s 00:11:35.756 user 0m0.031s 00:11:35.756 sys 0m0.068s 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:35.756 ************************************ 00:11:35.756 END TEST filesystem_ext4 00:11:35.756 ************************************ 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.756 ************************************ 00:11:35.756 START TEST filesystem_btrfs 00:11:35.756 ************************************ 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:35.756 14:13:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:35.756 btrfs-progs v6.8.1 00:11:35.756 See https://btrfs.readthedocs.io for more information. 00:11:35.756 00:11:35.756 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:35.756 NOTE: several default settings have changed in version 5.15, please make sure 00:11:35.756 this does not affect your deployments: 00:11:35.756 - DUP for metadata (-m dup) 00:11:35.756 - enabled no-holes (-O no-holes) 00:11:35.756 - enabled free-space-tree (-R free-space-tree) 00:11:35.756 00:11:35.756 Label: (null) 00:11:35.756 UUID: 3edff388-67a8-4b87-b7bc-2d56202fbd99 00:11:35.756 Node size: 16384 00:11:35.756 Sector size: 4096 (CPU page size: 4096) 00:11:35.756 Filesystem size: 510.00MiB 00:11:35.756 Block group profiles: 00:11:35.756 Data: single 8.00MiB 00:11:35.756 Metadata: DUP 32.00MiB 00:11:35.756 System: DUP 8.00MiB 00:11:35.756 SSD detected: yes 00:11:35.756 Zoned device: no 00:11:35.756 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:35.756 Checksum: crc32c 00:11:35.756 Number of devices: 1 00:11:35.756 Devices: 00:11:35.756 ID SIZE PATH 00:11:35.756 1 510.00MiB /dev/nvme0n1p1 00:11:35.756 00:11:35.756 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:35.757 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1543673 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:35.757 
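As the '[' btrfs = ext4 ']' test above shows, the make_filesystem helper varies per filesystem only in its force flag: mke2fs takes a capital -F, while mkfs.btrfs and mkfs.xfs take -f. A rough reconstruction of that branch (the retry counter i appears in the trace but no retry is exercised in this run); this sketches the shape visible in the xtrace output, not the verbatim common/autotest_common.sh source:

  make_filesystem() {
      local fstype=$1 dev_name=$2
      local i=0 force
      if [ "$fstype" = ext4 ]; then
          force=-F        # mke2fs forces with -F
      else
          force=-f        # mkfs.btrfs and mkfs.xfs force with -f
      fi
      mkfs."$fstype" "$force" "$dev_name"
  }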
14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:35.757 00:11:35.757 real 0m0.793s 00:11:35.757 user 0m0.020s 00:11:35.757 sys 0m0.125s 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:35.757 ************************************ 00:11:35.757 END TEST filesystem_btrfs 00:11:35.757 ************************************ 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.757 ************************************ 00:11:35.757 START TEST filesystem_xfs 00:11:35.757 ************************************ 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:35.757 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:35.757 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:35.757 = sectsz=512 attr=2, projid32bit=1 00:11:35.757 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:35.757 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:35.757 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:35.757 = sunit=0 swidth=0 blks 00:11:35.757 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:35.757 log =internal log bsize=4096 blocks=16384, version=2 00:11:35.757 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:35.757 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:36.692 Discarding blocks...Done. 00:11:36.692 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:36.692 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:39.225 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:39.225 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:39.225 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:39.225 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:39.225 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:39.225 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:39.225 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1543673 00:11:39.225 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:39.225 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:39.225 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:39.225 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:39.225 00:11:39.225 real 0m3.295s 00:11:39.225 user 0m0.020s 00:11:39.225 sys 0m0.078s 00:11:39.225 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.225 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:39.225 ************************************ 00:11:39.225 END TEST filesystem_xfs 00:11:39.225 ************************************ 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:39.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.226 14:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1543673 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1543673 ']' 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1543673 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1543673 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1543673' 00:11:39.226 killing process with pid 1543673 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1543673 00:11:39.226 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 1543673 00:11:39.485 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:39.485 00:11:39.485 real 0m16.417s 00:11:39.485 user 1m4.681s 00:11:39.485 sys 0m1.449s 00:11:39.485 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.485 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.485 ************************************ 00:11:39.485 END TEST nvmf_filesystem_no_in_capsule 00:11:39.485 ************************************ 00:11:39.744 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:39.744 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:39.744 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.744 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:39.744 ************************************ 00:11:39.744 START TEST nvmf_filesystem_in_capsule 00:11:39.744 ************************************ 00:11:39.744 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:39.744 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:39.744 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:39.744 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:39.744 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:39.744 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.744 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1546620 00:11:39.744 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1546620 00:11:39.744 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:39.744 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1546620 ']' 00:11:39.744 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.744 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.744 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:39.744 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.744 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.744 [2024-12-10 14:13:40.354634] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:11:39.744 [2024-12-10 14:13:40.354678] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.744 [2024-12-10 14:13:40.441556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:39.744 [2024-12-10 14:13:40.479976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:39.744 [2024-12-10 14:13:40.480011] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:39.744 [2024-12-10 14:13:40.480019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:39.744 [2024-12-10 14:13:40.480026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:39.744 [2024-12-10 14:13:40.480032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:39.744 [2024-12-10 14:13:40.481580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.744 [2024-12-10 14:13:40.481686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.744 [2024-12-10 14:13:40.481784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.744 [2024-12-10 14:13:40.481784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.680 [2024-12-10 14:13:41.240940] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:40.680 14:13:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.680 Malloc1 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.680 [2024-12-10 14:13:41.396427] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:40.680 14:13:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.680 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.939 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.939 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:40.939 { 00:11:40.939 "name": "Malloc1", 00:11:40.939 "aliases": [ 00:11:40.939 "f4af97b6-9593-4f6d-82bb-e09e5726e859" 00:11:40.939 ], 00:11:40.939 "product_name": "Malloc disk", 00:11:40.939 "block_size": 512, 00:11:40.939 "num_blocks": 1048576, 00:11:40.939 "uuid": "f4af97b6-9593-4f6d-82bb-e09e5726e859", 00:11:40.939 "assigned_rate_limits": { 00:11:40.939 "rw_ios_per_sec": 0, 00:11:40.939 "rw_mbytes_per_sec": 0, 00:11:40.939 "r_mbytes_per_sec": 0, 00:11:40.939 "w_mbytes_per_sec": 0 00:11:40.939 }, 00:11:40.939 "claimed": true, 00:11:40.939 "claim_type": "exclusive_write", 00:11:40.939 "zoned": false, 00:11:40.939 "supported_io_types": { 00:11:40.939 "read": true, 00:11:40.939 "write": true, 00:11:40.939 "unmap": true, 00:11:40.939 "flush": true, 00:11:40.939 "reset": true, 00:11:40.939 "nvme_admin": false, 00:11:40.939 "nvme_io": false, 00:11:40.939 "nvme_io_md": false, 00:11:40.939 "write_zeroes": true, 00:11:40.939 "zcopy": true, 00:11:40.939 "get_zone_info": false, 00:11:40.939 "zone_management": false, 00:11:40.939 "zone_append": false, 00:11:40.939 "compare": false, 00:11:40.939 "compare_and_write": false, 00:11:40.939 "abort": true, 00:11:40.939 "seek_hole": false, 00:11:40.939 "seek_data": false, 00:11:40.939 "copy": true, 00:11:40.939 "nvme_iov_md": false 00:11:40.939 }, 00:11:40.939 "memory_domains": [ 00:11:40.939 { 00:11:40.939 "dma_device_id": "system", 00:11:40.939 "dma_device_type": 1 00:11:40.939 }, 00:11:40.939 { 00:11:40.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.939 "dma_device_type": 2 00:11:40.939 } 00:11:40.939 ], 00:11:40.939 "driver_specific": {} 00:11:40.939 } 00:11:40.939 ]' 00:11:40.939 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:40.939 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:40.939 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:40.939 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:40.939 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:40.939 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:40.939 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:40.939 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:41.875 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:41.875 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:41.875 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:41.875 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:41.875 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:44.405 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:44.405 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:44.405 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:44.405 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:44.405 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:44.405 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:44.405 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:44.405 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:44.405 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:44.405 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:44.405 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:44.405 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:44.405 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:44.405 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:44.405 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:44.405 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:44.405 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:44.405 14:13:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:44.662 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:45.596 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:45.596 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:45.596 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:45.596 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.596 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.596 ************************************ 00:11:45.596 START TEST filesystem_in_capsule_ext4 00:11:45.596 ************************************ 00:11:45.596 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:45.596 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:45.596 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:45.596 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:45.596 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:45.596 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:45.596 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:45.596 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:45.596 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:45.596 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:45.596 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:45.596 mke2fs 1.47.0 (5-Feb-2023) 00:11:45.854 Discarding device blocks: 0/522240 done 00:11:45.854 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:45.854 Filesystem UUID: a9f5950d-8cfb-4441-8262-70720a96f0a3 00:11:45.854 Superblock backups stored on blocks: 00:11:45.854 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:45.854 00:11:45.854 Allocating group tables: 0/64 done 00:11:45.854 Writing inode tables: 
0/64 done 00:11:45.854 Creating journal (8192 blocks): done 00:11:47.357 Writing superblocks and filesystem accounting information: 0/64 done 00:11:47.357 00:11:47.357 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:47.357 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:53.922 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:53.922 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:53.922 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:53.922 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:53.922 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:53.922 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:53.922 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1546620 00:11:53.922 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:53.922 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:53.922 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:53.922 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:53.922 00:11:53.922 real 0m7.719s 00:11:53.922 user 0m0.027s 00:11:53.922 sys 0m0.077s 00:11:53.922 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.922 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:53.922 ************************************ 00:11:53.922 END TEST filesystem_in_capsule_ext4 00:11:53.922 ************************************ 00:11:53.922 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:53.922 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:53.922 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.922 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.922
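With the in-capsule ext4 leg now passed, note the one knob that separates the two halves of this log: the transport's in-capsule data size. The nvmf_filesystem_no_in_capsule run created its transport with -c 0, while this run used -c 4096, so writes of up to 4 KiB travel inside the NVMe/TCP command capsule instead of being fetched by the target in a separate data transfer. The two invocations from the traces, issued through the suite's rpc_cmd wrapper:

  # nvmf_filesystem_no_in_capsule (target pid 1543673)
  nvmf_create_transport -t tcp -o -u 8192 -c 0
  # nvmf_filesystem_in_capsule (target pid 1546620)
  nvmf_create_transport -t tcp -o -u 8192 -c 4096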
************************************ 00:11:53.922 START TEST filesystem_in_capsule_btrfs 00:11:53.922 ************************************ 00:11:53.922 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:53.922 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:53.922 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:53.922 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:53.922 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:53.922 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:53.922 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:53.922 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:53.922 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:53.922 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:53.922 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:53.922 btrfs-progs v6.8.1 00:11:53.922 See https://btrfs.readthedocs.io for more information. 00:11:53.922 00:11:53.922 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:53.922 NOTE: several default settings have changed in version 5.15, please make sure 00:11:53.922 this does not affect your deployments: 00:11:53.922 - DUP for metadata (-m dup) 00:11:53.922 - enabled no-holes (-O no-holes) 00:11:53.922 - enabled free-space-tree (-R free-space-tree) 00:11:53.922 00:11:53.922 Label: (null) 00:11:53.922 UUID: 6180efaa-e330-440b-b36e-24fdea8b48b2 00:11:53.922 Node size: 16384 00:11:53.922 Sector size: 4096 (CPU page size: 4096) 00:11:53.922 Filesystem size: 510.00MiB 00:11:53.922 Block group profiles: 00:11:53.923 Data: single 8.00MiB 00:11:53.923 Metadata: DUP 32.00MiB 00:11:53.923 System: DUP 8.00MiB 00:11:53.923 SSD detected: yes 00:11:53.923 Zoned device: no 00:11:53.923 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:53.923 Checksum: crc32c 00:11:53.923 Number of devices: 1 00:11:53.923 Devices: 00:11:53.923 ID SIZE PATH 00:11:53.923 1 510.00MiB /dev/nvme0n1p1 00:11:53.923 00:11:53.923 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:53.923 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1546620 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:54.182 00:11:54.182 real 0m0.705s 00:11:54.182 user 0m0.025s 00:11:54.182 sys 0m0.116s 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:54.182 ************************************ 00:11:54.182 END TEST filesystem_in_capsule_btrfs 00:11:54.182 ************************************ 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.182 ************************************ 00:11:54.182 START TEST filesystem_in_capsule_xfs 00:11:54.182 ************************************ 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:54.182 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:54.441 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:54.441 = sectsz=512 attr=2, projid32bit=1 00:11:54.441 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:54.441 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:54.441 data = bsize=4096 blocks=130560, imaxpct=25 00:11:54.441 = sunit=0 swidth=0 blks 00:11:54.441 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:54.441 log =internal log bsize=4096 blocks=16384, version=2 00:11:54.441 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:54.441 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:55.376 Discarding blocks...Done. 
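All three mkfs invocations in this block go through the make_filesystem helper traced at common/autotest_common.sh@930-941. A minimal sketch of that helper as it appears in the trace (only the -f path for btrfs/xfs is actually traced; the ext4 branch shown as -F, and any retry behaviour behind the i counter, are assumptions):

make_filesystem() {
    local fstype=$1
    local dev_name=$2      # here /dev/nvme0n1p1, a partition on the exported namespace
    local i=0              # retry counter (loop not visible in this trace)
    local force
    if [ "$fstype" = ext4 ]; then
        force=-F           # assumed: mkfs.ext4 spells "force" as -F
    else
        force=-f           # traced: mkfs.btrfs and mkfs.xfs both take -f
    fi
    mkfs."$fstype" $force "$dev_name"
}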
00:11:55.376 14:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:55.376 14:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:57.908 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:57.908 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:57.908 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:57.908 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:57.908 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:57.908 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:57.908 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1546620 00:11:57.908 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:57.909 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:57.909 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:57.909 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:57.909 00:11:57.909 real 0m3.542s 00:11:57.909 user 0m0.030s 00:11:57.909 sys 0m0.070s 00:11:57.909 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.909 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:57.909 ************************************ 00:11:57.909 END TEST filesystem_in_capsule_xfs 00:11:57.909 ************************************ 00:11:57.909 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:57.909 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:57.909 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:57.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.909 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:57.909 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:57.909 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:57.909 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.909 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:57.909 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.909 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:57.909 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.909 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.909 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.168 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.168 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:58.168 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1546620 00:11:58.168 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1546620 ']' 00:11:58.168 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1546620 00:11:58.168 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:58.168 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.168 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1546620 00:11:58.168 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:58.168 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.168 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1546620' 00:11:58.168 killing process with pid 1546620 00:11:58.168 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1546620 00:11:58.168 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1546620 00:11:58.427 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:58.427 00:11:58.427 real 0m18.727s 00:11:58.427 user 1m13.902s 00:11:58.427 sys 0m1.436s 00:11:58.427 14:13:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.427 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.427 ************************************ 00:11:58.427 END TEST nvmf_filesystem_in_capsule 00:11:58.427 ************************************ 00:11:58.427 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:58.427 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:58.427 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:58.427 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:58.427 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:58.427 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:58.427 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:58.427 rmmod nvme_tcp 00:11:58.427 rmmod nvme_fabrics 00:11:58.427 rmmod nvme_keyring 00:11:58.428 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:58.428 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:58.428 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:58.428 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:58.428 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:58.428 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:58.428 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:58.428 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:58.428 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:58.428 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:58.428 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:58.428 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:58.428 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:58.428 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.428 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.428 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:00.964 00:12:00.964 real 0m44.779s 00:12:00.964 user 2m20.841s 00:12:00.964 sys 0m8.275s 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:00.964 
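Teardown mirrors setup: the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded, then iptr at nvmf/common.sh@791 strips only the firewall rules the test added before the namespace is removed below. The idiom, as the three traced commands compose (a sketch; the matching ipts helper that tags rules on insertion shows up in the discovery test further down):

iptr() {
    # reload the current ruleset minus every rule carrying the SPDK_NVMF
    # comment that ipts attaches when it inserts a rule
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}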
************************************ 00:12:00.964 END TEST nvmf_filesystem 00:12:00.964 ************************************ 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:00.964 ************************************ 00:12:00.964 START TEST nvmf_target_discovery 00:12:00.964 ************************************ 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:00.964 * Looking for test storage... 00:12:00.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:00.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.964 --rc genhtml_branch_coverage=1 00:12:00.964 --rc genhtml_function_coverage=1 00:12:00.964 --rc genhtml_legend=1 00:12:00.964 --rc geninfo_all_blocks=1 00:12:00.964 --rc geninfo_unexecuted_blocks=1 00:12:00.964 00:12:00.964 ' 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:00.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.964 --rc genhtml_branch_coverage=1 00:12:00.964 --rc genhtml_function_coverage=1 00:12:00.964 --rc genhtml_legend=1 00:12:00.964 --rc geninfo_all_blocks=1 00:12:00.964 --rc geninfo_unexecuted_blocks=1 00:12:00.964 00:12:00.964 ' 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:00.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.964 --rc genhtml_branch_coverage=1 00:12:00.964 --rc genhtml_function_coverage=1 00:12:00.964 --rc genhtml_legend=1 00:12:00.964 --rc geninfo_all_blocks=1 00:12:00.964 --rc geninfo_unexecuted_blocks=1 00:12:00.964 00:12:00.964 ' 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:00.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.964 --rc genhtml_branch_coverage=1 00:12:00.964 --rc genhtml_function_coverage=1 00:12:00.964 --rc genhtml_legend=1 00:12:00.964 --rc geninfo_all_blocks=1 00:12:00.964 --rc geninfo_unexecuted_blocks=1 00:12:00.964 00:12:00.964 ' 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.964 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:00.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:00.965 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.535 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.535 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:07.535 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:07.535 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:07.535 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:07.535 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:07.535 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:07.535 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:07.535 14:14:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:07.535 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:07.536 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:07.536 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:07.536 Found net devices under 0000:af:00.0: cvl_0_0 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:07.536 Found net devices under 0000:af:00.1: cvl_0_1 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.536 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:07.536 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:07.536 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.536 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.536 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.536 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.536 14:14:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:07.536 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.536 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.536 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.536 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:07.536 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:07.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:12:07.536 00:12:07.536 --- 10.0.0.2 ping statistics --- 00:12:07.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.536 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:12:07.536 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:07.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:12:07.536 00:12:07.536 --- 10.0.0.1 ping statistics --- 00:12:07.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.536 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:12:07.536 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.536 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:07.536 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:07.536 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.536 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:07.536 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:07.536 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.536 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:07.536 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:07.795 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:07.795 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:07.795 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:07.795 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.795 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1553796 00:12:07.795 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1553796 00:12:07.795 14:14:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.795 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1553796 ']' 00:12:07.795 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.795 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.795 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.795 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.795 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.795 [2024-12-10 14:14:08.346193] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:12:07.795 [2024-12-10 14:14:08.346254] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.795 [2024-12-10 14:14:08.431956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.795 [2024-12-10 14:14:08.472662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.795 [2024-12-10 14:14:08.472700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.795 [2024-12-10 14:14:08.472707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.795 [2024-12-10 14:14:08.472713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.795 [2024-12-10 14:14:08.472718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
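The nvmf_tgt now starting was launched inside the namespace built a few lines earlier by nvmf_tcp_init (nvmf/common.sh@250-291), which splits the two e810 ports so initiator traffic genuinely traverses TCP. A condensed sketch of the traced commands (addresses and interface names exactly as above; ordering lightly compressed):

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                 # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator keeps the other port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged rule, removed later by iptr
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator
# nvmf_tgt itself then runs under "ip netns exec cvl_0_0_ns_spdk", as traced above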
00:12:07.795 [2024-12-10 14:14:08.474242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.795 [2024-12-10 14:14:08.474349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.795 [2024-12-10 14:14:08.474460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.795 [2024-12-10 14:14:08.474460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.730 [2024-12-10 14:14:09.230300] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.730 Null1 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.730 14:14:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.730 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.731 [2024-12-10 14:14:09.288336] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.731 Null2 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:08.731 Null3 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.731 Null4 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.731 14:14:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:08.731 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420
00:12:08.990
00:12:08.990 Discovery Log Number of Records 6, Generation counter 6
00:12:08.990 =====Discovery Log Entry 0======
00:12:08.990 trtype: tcp
00:12:08.991 adrfam: ipv4
00:12:08.991 subtype: current discovery subsystem
00:12:08.991 treq: not required
00:12:08.991 portid: 0
00:12:08.991 trsvcid: 4420
00:12:08.991 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:12:08.991 traddr: 10.0.0.2
00:12:08.991 eflags: explicit discovery connections, duplicate discovery information
00:12:08.991 sectype: none
00:12:08.991 =====Discovery Log Entry 1======
00:12:08.991 trtype: tcp
00:12:08.991 adrfam: ipv4
00:12:08.991 subtype: nvme subsystem
00:12:08.991 treq: not required
00:12:08.991 portid: 0
00:12:08.991 trsvcid: 4420
00:12:08.991 subnqn: nqn.2016-06.io.spdk:cnode1
00:12:08.991 traddr: 10.0.0.2
00:12:08.991 eflags: none
00:12:08.991 sectype: none
00:12:08.991 =====Discovery Log Entry 2======
00:12:08.991 trtype: tcp
00:12:08.991 adrfam: ipv4
00:12:08.991 subtype: nvme subsystem
00:12:08.991 treq: not required
00:12:08.991 portid: 0
00:12:08.991 trsvcid: 4420
00:12:08.991 subnqn: nqn.2016-06.io.spdk:cnode2
00:12:08.991 traddr: 10.0.0.2
00:12:08.991 eflags: none
00:12:08.991 sectype: none
00:12:08.991 =====Discovery Log Entry 3======
00:12:08.991 trtype: tcp
00:12:08.991 adrfam: ipv4
00:12:08.991 subtype: nvme subsystem
00:12:08.991 treq: not required
00:12:08.991 portid: 0
00:12:08.991 trsvcid: 4420
00:12:08.991 subnqn: nqn.2016-06.io.spdk:cnode3
00:12:08.991 traddr: 10.0.0.2
00:12:08.991 eflags: none
00:12:08.991 sectype: none
00:12:08.991 =====Discovery Log Entry 4======
00:12:08.991 trtype: tcp
00:12:08.991 adrfam: ipv4
00:12:08.991 subtype: nvme subsystem
00:12:08.991 treq: not required
00:12:08.991 portid: 0
00:12:08.991 trsvcid: 4420
00:12:08.991 subnqn: nqn.2016-06.io.spdk:cnode4
00:12:08.991 traddr: 10.0.0.2
00:12:08.991 eflags: none
00:12:08.991 sectype: none
00:12:08.991 =====Discovery Log Entry 5======
00:12:08.991 trtype: tcp
00:12:08.991 adrfam: ipv4
00:12:08.991 subtype: discovery subsystem referral
00:12:08.991 treq: not required
00:12:08.991 portid: 0
00:12:08.991 trsvcid: 4430
00:12:08.991 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:12:08.991 traddr: 10.0.0.2
00:12:08.991 eflags: none
00:12:08.991 sectype: none
00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:12:08.991 Perform nvmf subsystem discovery via RPC
00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:08.991 [
00:12:08.991 {
00:12:08.991 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:12:08.991 "subtype": "Discovery",
00:12:08.991 "listen_addresses": [
00:12:08.991 {
00:12:08.991 "trtype": "TCP",
00:12:08.991 "adrfam": "IPv4",
00:12:08.991 "traddr": "10.0.0.2",
00:12:08.991 "trsvcid": "4420"
00:12:08.991 }
00:12:08.991 ],
00:12:08.991 "allow_any_host": true,
00:12:08.991 "hosts": []
00:12:08.991 },
00:12:08.991 {
00:12:08.991 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:12:08.991 "subtype": "NVMe",
00:12:08.991 "listen_addresses": [
00:12:08.991 {
00:12:08.991 "trtype": "TCP",
00:12:08.991 "adrfam": "IPv4",
00:12:08.991 "traddr": "10.0.0.2",
00:12:08.991 "trsvcid": "4420"
00:12:08.991 }
00:12:08.991 ],
00:12:08.991 "allow_any_host": true,
00:12:08.991 "hosts": [],
00:12:08.991 "serial_number": "SPDK00000000000001",
00:12:08.991 "model_number": "SPDK bdev Controller",
00:12:08.991 "max_namespaces": 32,
00:12:08.991 "min_cntlid": 1,
00:12:08.991 "max_cntlid": 65519,
00:12:08.991 "namespaces": [
00:12:08.991 {
00:12:08.991 "nsid": 1,
00:12:08.991 "bdev_name": "Null1",
00:12:08.991 "name": "Null1",
00:12:08.991 "nguid": "4AC3D103275C4BE9B5752621D65810C2",
00:12:08.991 "uuid": "4ac3d103-275c-4be9-b575-2621d65810c2"
00:12:08.991 }
00:12:08.991 ]
00:12:08.991 },
00:12:08.991 {
00:12:08.991 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:12:08.991 "subtype": "NVMe",
00:12:08.991 "listen_addresses": [
00:12:08.991 {
00:12:08.991 "trtype": "TCP",
00:12:08.991 "adrfam": "IPv4",
00:12:08.991 "traddr": "10.0.0.2",
00:12:08.991 "trsvcid": "4420"
00:12:08.991 }
00:12:08.991 ],
00:12:08.991 "allow_any_host": true,
00:12:08.991 "hosts": [],
00:12:08.991 "serial_number": "SPDK00000000000002",
00:12:08.991 "model_number": "SPDK bdev Controller",
00:12:08.991 "max_namespaces": 32,
00:12:08.991 "min_cntlid": 1,
00:12:08.991 "max_cntlid": 65519,
00:12:08.991 "namespaces": [
00:12:08.991 {
00:12:08.991 "nsid": 1,
00:12:08.991 "bdev_name": "Null2",
00:12:08.991 "name": "Null2",
00:12:08.991 "nguid": "4939C7D64B674086BFEDFB79967F0CC0",
00:12:08.991 "uuid": "4939c7d6-4b67-4086-bfed-fb79967f0cc0"
00:12:08.991 }
00:12:08.991 ]
00:12:08.991 },
00:12:08.991 {
00:12:08.991 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:12:08.991 "subtype": "NVMe",
00:12:08.991 "listen_addresses": [
00:12:08.991 {
00:12:08.991 "trtype": "TCP",
00:12:08.991 "adrfam": "IPv4",
00:12:08.991 "traddr": "10.0.0.2",
00:12:08.991 "trsvcid": "4420" 00:12:08.991 } 00:12:08.991 ], 00:12:08.991 "allow_any_host": true, 00:12:08.991 "hosts": [], 00:12:08.991 "serial_number": "SPDK00000000000003", 00:12:08.991 "model_number": "SPDK bdev Controller", 00:12:08.991 "max_namespaces": 32, 00:12:08.991 "min_cntlid": 1, 00:12:08.991 "max_cntlid": 65519, 00:12:08.991 "namespaces": [ 00:12:08.991 { 00:12:08.991 "nsid": 1, 00:12:08.991 "bdev_name": "Null3", 00:12:08.991 "name": "Null3", 00:12:08.991 "nguid": "6073A07DA22B44208189C1DECDF46E3B", 00:12:08.991 "uuid": "6073a07d-a22b-4420-8189-c1decdf46e3b" 00:12:08.991 } 00:12:08.991 ] 00:12:08.991 }, 00:12:08.991 { 00:12:08.991 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:08.991 "subtype": "NVMe", 00:12:08.991 "listen_addresses": [ 00:12:08.991 { 00:12:08.991 "trtype": "TCP", 00:12:08.991 "adrfam": "IPv4", 00:12:08.991 "traddr": "10.0.0.2", 00:12:08.991 "trsvcid": "4420" 00:12:08.991 } 00:12:08.991 ], 00:12:08.991 "allow_any_host": true, 00:12:08.991 "hosts": [], 00:12:08.991 "serial_number": "SPDK00000000000004", 00:12:08.991 "model_number": "SPDK bdev Controller", 00:12:08.991 "max_namespaces": 32, 00:12:08.991 "min_cntlid": 1, 00:12:08.991 "max_cntlid": 65519, 00:12:08.991 "namespaces": [ 00:12:08.991 { 00:12:08.991 "nsid": 1, 00:12:08.991 "bdev_name": "Null4", 00:12:08.991 "name": "Null4", 00:12:08.991 "nguid": "0E340084C021475AA80CC3764D922E2D", 00:12:08.991 "uuid": "0e340084-c021-475a-a80c-c3764d922e2d" 00:12:08.991 } 00:12:08.991 ] 00:12:08.991 } 00:12:08.991 ] 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.991 14:14:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:08.991 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.992 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.992 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.992 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:08.992 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.992 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:09.251 14:14:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:09.251 rmmod nvme_tcp 00:12:09.251 rmmod nvme_fabrics 00:12:09.251 rmmod nvme_keyring 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1553796 ']' 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1553796 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1553796 ']' 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1553796 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1553796 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1553796' 00:12:09.251 killing process with pid 1553796 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1553796 00:12:09.251 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1553796 00:12:09.511 14:14:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:12:09.511 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:12:09.511 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:12:09.511 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr
00:12:09.511 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save
00:12:09.511 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:12:09.511 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore
00:12:09.511 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:09.511 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns
00:12:09.511 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:09.511 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:09.511 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:11.416 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:11.416
00:12:11.416 real 0m10.855s
00:12:11.416 user 0m8.549s
00:12:11.416 sys 0m5.520s
00:12:11.416 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:11.416 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:11.416 ************************************
00:12:11.416 END TEST nvmf_target_discovery
00:12:11.416 ************************************
00:12:11.678 14:14:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:12:11.678 14:14:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:11.678 14:14:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:11.678 14:14:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:11.678 ************************************
00:12:11.678 START TEST nvmf_referrals
00:12:11.678 ************************************
00:12:11.678 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:12:11.678 * Looking for test storage...
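The nvmf_referrals run starting here drives the same RPC surface as the discovery test that just ended, so it is worth condensing what that test exercised. A minimal sketch of the equivalent commands, assuming a running nvmf_tgt, scripts/rpc.py on PATH, and the 10.0.0.2:4420 listener used by this job (the loop mirrors discovery.sh; this is illustrative, not the literal test script):

# One null bdev, subsystem, namespace and TCP listener per index.
for i in 1 2 3 4; do
  scripts/rpc.py bdev_null_create "Null$i" 102400 512
  scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
      -a -s "SPDK0000000000000$i"
  scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
      -t tcp -a 10.0.0.2 -s 4420
done
# Discovery listener plus one referral, which yields the 6 log-page records
# (1 discovery + 4 subsystems + 1 referral) printed by nvme discover above.
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
nvme discover -t tcp -a 10.0.0.2 -s 4420

Teardown then reverses the loop with nvmf_delete_subsystem and bdev_null_delete and removes the referral with nvmf_discovery_remove_referral, exactly as traced above.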
00:12:11.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:11.678 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:11.678 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:11.678 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:11.678 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:11.678 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:11.678 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:11.678 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:11.678 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.678 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:11.678 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:11.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.679 --rc genhtml_branch_coverage=1 00:12:11.679 --rc genhtml_function_coverage=1 00:12:11.679 --rc genhtml_legend=1 00:12:11.679 --rc geninfo_all_blocks=1 00:12:11.679 --rc geninfo_unexecuted_blocks=1 00:12:11.679 00:12:11.679 ' 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:11.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.679 --rc genhtml_branch_coverage=1 00:12:11.679 --rc genhtml_function_coverage=1 00:12:11.679 --rc genhtml_legend=1 00:12:11.679 --rc geninfo_all_blocks=1 00:12:11.679 --rc geninfo_unexecuted_blocks=1 00:12:11.679 00:12:11.679 ' 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:11.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.679 --rc genhtml_branch_coverage=1 00:12:11.679 --rc genhtml_function_coverage=1 00:12:11.679 --rc genhtml_legend=1 00:12:11.679 --rc geninfo_all_blocks=1 00:12:11.679 --rc geninfo_unexecuted_blocks=1 00:12:11.679 00:12:11.679 ' 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:11.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.679 --rc genhtml_branch_coverage=1 00:12:11.679 --rc genhtml_function_coverage=1 00:12:11.679 --rc genhtml_legend=1 00:12:11.679 --rc geninfo_all_blocks=1 00:12:11.679 --rc geninfo_unexecuted_blocks=1 00:12:11.679 00:12:11.679 ' 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:11.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
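The "[: : integer expression expected" message just above is a shell quirk worth noting: at nvmf/common.sh line 33 an unset variable reaches an arithmetic test ('[' '' -eq 1 ']' in the xtrace), so the test builtin sees an empty string where it expects an integer. A minimal reproduction plus a guard; the variable name is hypothetical, since the trace does not show which flag is unset:

# Reproduces the warning: an empty value fed to an integer comparison.
SOME_TEST_FLAG=""
[ "$SOME_TEST_FLAG" -eq 1 ] && echo enabled
# -> bash: [: : integer expression expected (status 2, treated as false)

# Guarded form: substitute a default so the test always sees an integer.
[ "${SOME_TEST_FLAG:-0}" -eq 1 ] && echo enabled

The warning is benign here because the failed test simply evaluates as false and the script continues.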
00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.679 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.939 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:11.939 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:11.939 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:11.939 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:18.646 14:14:18 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:18.646 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:18.646 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:18.646 
14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.646 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:18.647 Found net devices under 0000:af:00.0: cvl_0_0 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:18.647 Found net devices under 0000:af:00.1: cvl_0_1 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:18.647 14:14:18 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:12:18.647 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:18.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:18.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms
00:12:18.647
00:12:18.647 --- 10.0.0.2 ping statistics ---
00:12:18.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:18.647 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:18.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:18.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms
00:12:18.647
00:12:18.647 --- 10.0.0.1 ping statistics ---
00:12:18.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:18.647 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1557956
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1557956
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1557956 ']'
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
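The nvmf_tcp_init steps traced above build the test topology: one e810 port (cvl_0_0) becomes the target side and moves into a private network namespace, the other (cvl_0_1) stays in the root namespace as the initiator, and reachability is proven with one ping in each direction. Condensed from the commands in the trace (run as root; the interface names are the ones this host detected):

ip netns add cvl_0_0_ns_spdk                  # namespace for the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                            # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Putting the target behind a namespace lets the initiator-side nvme-cli traffic cross a real network path on one physical host.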
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:18.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:18.647 [2024-12-10 14:14:19.273970] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization...
00:12:18.647 [2024-12-10 14:14:19.274013] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:18.647 [2024-12-10 14:14:19.359571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:18.906 [2024-12-10 14:14:19.403137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:18.906 [2024-12-10 14:14:19.403173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:18.906 [2024-12-10 14:14:19.403180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:18.906 [2024-12-10 14:14:19.403186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:18.906 [2024-12-10 14:14:19.403191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:18.906 [2024-12-10 14:14:19.404752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:18.906 [2024-12-10 14:14:19.404875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:18.906 [2024-12-10 14:14:19.404981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:18.906 [2024-12-10 14:14:19.404982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0
00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:19.473 [2024-12-10 14:14:20.160766] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
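With networking up, the target binary runs inside the namespace and the referrals test drives it over the RPC socket. A rough equivalent of the startup just traced; the until-loop stands in for the test's waitforlisten helper, which polls the socket more carefully (rpc_get_methods is a standard SPDK RPC, used here only as a liveness probe):

# Start nvmf_tgt in the target namespace; the RPC unix socket lives on the
# filesystem, so rpc.py can reach it from the root namespace.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

# Create the TCP transport with the options used by the test, then expose
# the discovery service on the standard NVMe-oF discovery port 8009.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery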
00:12:19.473 [2024-12-10 14:14:20.190381] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.473 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:19.732 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:19.991 14:14:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:19.991 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:20.250 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:20.509 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:20.509 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:20.509 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:20.509 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:20.509 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:20.509 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:20.509 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:20.767 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:20.767 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:20.767 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:20.767 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:20.767 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:20.767 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:20.767 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:20.767 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:20.767 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.767 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.767 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.767 14:14:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:20.767 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:20.767 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:20.767 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:20.767 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.767 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:20.767 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.767 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.767 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:20.767 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:21.026 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:21.026 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:21.026 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:21.026 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:21.026 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:21.026 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:21.026 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:21.026 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:21.026 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:21.026 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:21.026 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:21.026 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:21.026 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:21.285 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:21.285 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:21.285 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:21.285 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:21.285 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:21.285 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:21.285 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:21.285 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:21.285 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.285 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.285 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.285 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:21.544 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:21.544 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.544 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.544 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.544 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:21.544 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:21.544 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:21.544 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:21.544 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:21.544 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:21.544 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:21.803 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
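
Each add/remove in this test is verified from two vantage points: the control plane (nvmf_discovery_get_referrals over RPC) and the wire (an initiator-side nvme discover against port 8009), with jq extracting the traddr or subnqn fields and sort making the comparison order-independent. Condensed into a sketch, reusing the host NQN/ID from this run:

  rpc_ips=$($rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
  nvme_ips=$(nvme discover \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
      --hostid=801347e8-3fd0-e911-906e-0017a4403562 \
      -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)
  [[ "$rpc_ips" == "$nvme_ips" ]]    # get_referral_ips rpc vs get_referral_ips nvme

The select() filter drops the record describing the discovery controller being queried, leaving only the referral entries.
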
00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:21.804 rmmod nvme_tcp 00:12:21.804 rmmod nvme_fabrics 00:12:21.804 rmmod nvme_keyring 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1557956 ']' 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1557956 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1557956 ']' 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1557956 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1557956 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1557956' 00:12:21.804 killing process with pid 1557956 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1557956 00:12:21.804 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1557956 00:12:22.063 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:22.063 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:22.063 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:22.063 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:22.063 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:22.063 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:22.063 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:22.063 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:22.063 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:22.063 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.063 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.063 14:14:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.968 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:23.968 00:12:23.968 real 0m12.443s 00:12:23.968 user 0m15.553s 00:12:23.968 sys 0m5.893s 00:12:23.968 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.968 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.968 ************************************ 00:12:23.968 END TEST nvmf_referrals 00:12:23.968 ************************************ 00:12:23.968 14:14:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:23.968 14:14:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:23.968 14:14:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.968 14:14:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:24.228 ************************************ 00:12:24.228 START TEST nvmf_connect_disconnect 00:12:24.228 ************************************ 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:24.228 * Looking for test storage... 00:12:24.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:24.228 14:14:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:24.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.228 --rc genhtml_branch_coverage=1 00:12:24.228 --rc genhtml_function_coverage=1 00:12:24.228 --rc genhtml_legend=1 00:12:24.228 --rc geninfo_all_blocks=1 00:12:24.228 --rc geninfo_unexecuted_blocks=1 00:12:24.228 00:12:24.228 ' 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:24.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.228 --rc genhtml_branch_coverage=1 00:12:24.228 --rc genhtml_function_coverage=1 00:12:24.228 --rc genhtml_legend=1 00:12:24.228 --rc geninfo_all_blocks=1 00:12:24.228 --rc geninfo_unexecuted_blocks=1 00:12:24.228 00:12:24.228 ' 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:24.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.228 --rc genhtml_branch_coverage=1 00:12:24.228 --rc genhtml_function_coverage=1 00:12:24.228 --rc genhtml_legend=1 00:12:24.228 --rc geninfo_all_blocks=1 00:12:24.228 --rc geninfo_unexecuted_blocks=1 00:12:24.228 00:12:24.228 ' 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:24.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.228 --rc genhtml_branch_coverage=1 00:12:24.228 --rc genhtml_function_coverage=1 00:12:24.228 --rc genhtml_legend=1 00:12:24.228 --rc geninfo_all_blocks=1 00:12:24.228 --rc geninfo_unexecuted_blocks=1 00:12:24.228 00:12:24.228 ' 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:24.228 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:24.229 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.229 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.229 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.229 14:14:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:24.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:24.229 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:24.229 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:24.229 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:24.229 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:24.229 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:24.229 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:24.229 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:24.229 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.229 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:24.229 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:24.229 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:24.229 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.229 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.229 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.229 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:24.229 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:24.229 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:24.229 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:30.795 
14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:30.795 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:30.795 
14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.795 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:30.795 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:30.796 Found net devices under 0000:af:00.0: cvl_0_0 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
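
The device scan in this block matches PCI functions against known vendor:device pairs (0x8086 0x159b, the Intel E810 'ice' parts found at 0000:af:00.0 and 0000:af:00.1) and then resolves each function to its kernel net device through sysfs, exactly as the pci_net_devs expansion in the trace suggests. The core lookup amounts to:

  pci=0000:af:00.0
  # a NIC bound to a kernel driver exposes its netdev name(s) under .../net/
  for dev in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
  done

which is why the two cvl_0_* interfaces surface here and get collected into net_devs for the namespace setup that follows.
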
00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:30.796 Found net devices under 0000:af:00.1: cvl_0_1 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:30.796 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.055 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.055 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:31.055 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:31.055 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.055 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.055 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.055 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:31.055 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:31.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:12:31.055 00:12:31.055 --- 10.0.0.2 ping statistics --- 00:12:31.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.055 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:12:31.055 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:31.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:12:31.055 00:12:31.055 --- 10.0.0.1 ping statistics --- 00:12:31.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.055 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:12:31.055 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.055 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:31.055 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:31.055 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.055 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:31.055 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:31.055 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.055 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:31.056 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:31.056 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:31.056 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:31.056 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:31.056 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.056 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1562456 00:12:31.056 14:14:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:31.056 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1562456 00:12:31.056 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1562456 ']' 00:12:31.056 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.056 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.056 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.056 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.056 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.315 [2024-12-10 14:14:31.823797] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:12:31.315 [2024-12-10 14:14:31.823845] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.315 [2024-12-10 14:14:31.906364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.315 [2024-12-10 14:14:31.947359] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.315 [2024-12-10 14:14:31.947393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.315 [2024-12-10 14:14:31.947401] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.315 [2024-12-10 14:14:31.947407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.315 [2024-12-10 14:14:31.947412] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
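
The second target comes up behind the same namespace plumbing the referrals test used (cvl_0_0 moved into cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 kept host-side with 10.0.0.1/24, iptables opened for the data port). What follows in the trace is the connect_disconnect workload proper: provision a RAM-backed namespace once, then connect and disconnect an initiator num_iterations=5 times. A sketch of the provisioning plus one cycle, reusing this run's host NQN; a real loop would also wait for the controller's device node to appear before disconnecting:

  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
  $rpc bdev_malloc_create 64 512                     # 64 MiB bdev, 512 B blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # one iteration; the disconnect prints the
  # "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines seen below
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
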
00:12:31.315 [2024-12-10 14:14:31.948957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.315 [2024-12-10 14:14:31.949066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.315 [2024-12-10 14:14:31.949176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.315 [2024-12-10 14:14:31.949177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.315 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.315 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:31.315 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:31.315 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:31.315 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.574 [2024-12-10 14:14:32.086847] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.574 14:14:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.574 [2024-12-10 14:14:32.155364] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:31.574 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:34.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.002 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:48.002 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:48.002 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:48.002 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:48.003 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:48.003 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:48.003 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:48.003 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:48.003 rmmod nvme_tcp 00:12:48.003 rmmod nvme_fabrics 00:12:48.003 rmmod nvme_keyring 00:12:48.003 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:48.003 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:48.003 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:48.003 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1562456 ']' 00:12:48.003 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1562456 00:12:48.003 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1562456 ']' 00:12:48.003 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1562456 00:12:48.003 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
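Provisioning for connect_disconnect is just the five RPCs traced above, followed by five connect/disconnect iterations before the teardown that continues below. Condensed into a plain script against the same RPC socket; the per-iteration loop is written here as ordinary nvme-cli calls, which is an assumption about what the script's helpers do, though the "disconnected 1 controller(s)" lines above are characteristic nvme-cli output:

  rpc=./scripts/rpc.py                              # defaults to /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0 # flags as recorded in the trace
  $rpc bdev_malloc_create 64 512                    # 64 MiB RAM bdev, 512 B blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for i in $(seq 5); do                             # num_iterations=5 per the trace
      nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done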
00:12:48.003 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.003 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1562456 00:12:48.003 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:48.003 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:48.003 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1562456' 00:12:48.003 killing process with pid 1562456 00:12:48.003 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1562456 00:12:48.003 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1562456 00:12:48.262 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:48.262 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:48.262 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:48.262 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:48.262 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:48.262 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:48.262 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:48.262 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:48.262 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:48.262 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.262 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.262 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.164 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:50.164 00:12:50.164 real 0m26.136s 00:12:50.164 user 1m8.774s 00:12:50.164 sys 0m6.485s 00:12:50.164 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.164 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:50.164 ************************************ 00:12:50.164 END TEST nvmf_connect_disconnect 00:12:50.164 ************************************ 00:12:50.164 14:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:50.164 14:14:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:50.164 14:14:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.164 14:14:50 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:50.423 ************************************ 00:12:50.423 START TEST nvmf_multitarget 00:12:50.423 ************************************ 00:12:50.423 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:50.423 * Looking for test storage... 00:12:50.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:50.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.423 --rc genhtml_branch_coverage=1 00:12:50.423 --rc genhtml_function_coverage=1 00:12:50.423 --rc genhtml_legend=1 00:12:50.423 --rc geninfo_all_blocks=1 00:12:50.423 --rc geninfo_unexecuted_blocks=1 00:12:50.423 00:12:50.423 ' 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:50.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.423 --rc genhtml_branch_coverage=1 00:12:50.423 --rc genhtml_function_coverage=1 00:12:50.423 --rc genhtml_legend=1 00:12:50.423 --rc geninfo_all_blocks=1 00:12:50.423 --rc geninfo_unexecuted_blocks=1 00:12:50.423 00:12:50.423 ' 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:50.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.423 --rc genhtml_branch_coverage=1 00:12:50.423 --rc genhtml_function_coverage=1 00:12:50.423 --rc genhtml_legend=1 00:12:50.423 --rc geninfo_all_blocks=1 00:12:50.423 --rc geninfo_unexecuted_blocks=1 00:12:50.423 00:12:50.423 ' 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:50.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.423 --rc genhtml_branch_coverage=1 00:12:50.423 --rc genhtml_function_coverage=1 00:12:50.423 --rc genhtml_legend=1 00:12:50.423 --rc geninfo_all_blocks=1 00:12:50.423 --rc geninfo_unexecuted_blocks=1 00:12:50.423 00:12:50.423 ' 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.423 14:14:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.423 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:50.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:50.424 14:14:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:50.424 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:56.991 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:56.991 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:56.992 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:56.992 Found net devices under 0000:af:00.0: cvl_0_0 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:56.992 Found net devices under 0000:af:00.1: cvl_0_1 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:56.992 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:57.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:12:57.252 00:12:57.252 --- 10.0.0.2 ping statistics --- 00:12:57.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.252 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:57.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:12:57.252 00:12:57.252 --- 10.0.0.1 ping statistics --- 00:12:57.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.252 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1569241 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1569241 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1569241 ']' 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:57.252 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:57.511 [2024-12-10 14:14:58.009136] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:12:57.511 [2024-12-10 14:14:58.009188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.511 [2024-12-10 14:14:58.093837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:57.511 [2024-12-10 14:14:58.133836] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.511 [2024-12-10 14:14:58.133874] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.511 [2024-12-10 14:14:58.133882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.511 [2024-12-10 14:14:58.133889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.511 [2024-12-10 14:14:58.133895] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.511 [2024-12-10 14:14:58.135400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.511 [2024-12-10 14:14:58.135512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.511 [2024-12-10 14:14:58.135617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.511 [2024-12-10 14:14:58.135619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:58.447 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.447 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:58.447 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:58.447 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:58.447 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:58.447 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:58.447 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:58.447 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:58.447 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:58.447 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:58.447 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:58.447 "nvmf_tgt_1" 00:12:58.447 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:58.706 "nvmf_tgt_2" 00:12:58.706 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
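multitarget_rpc.py is the per-target counterpart to rpc.py, speaking to the same /var/tmp/spdk.sock. The create/verify/delete cycle running above and below reduces to the following sketch; the jq length checks mirror the script's own '[' N '!=' N ']' assertions, and -s 32 is simply the size argument recorded in the trace:

  mrpc=./test/nvmf/target/multitarget_rpc.py
  [ "$($mrpc nvmf_get_targets | jq length)" = 1 ]   # only the default target at start
  $mrpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $mrpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($mrpc nvmf_get_targets | jq length)" = 3 ]   # default + the two new targets
  $mrpc nvmf_delete_target -n nvmf_tgt_1
  $mrpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($mrpc nvmf_get_targets | jq length)" = 1 ]   # back to just the default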
00:12:58.706 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:58.706 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:58.706 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:58.706 true 00:12:58.706 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:58.964 true 00:12:58.964 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:58.964 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:58.964 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:58.964 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:58.964 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:58.964 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:58.964 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:58.964 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:58.964 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:58.964 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:58.964 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:58.964 rmmod nvme_tcp 00:12:58.964 rmmod nvme_fabrics 00:12:58.964 rmmod nvme_keyring 00:12:58.964 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:58.964 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:58.964 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:58.964 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1569241 ']' 00:12:58.964 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1569241 00:12:58.964 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1569241 ']' 00:12:58.964 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1569241 00:12:58.964 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:58.965 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.223 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1569241 00:12:59.224 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.224 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.224 14:14:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1569241' 00:12:59.224 killing process with pid 1569241 00:12:59.224 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1569241 00:12:59.224 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1569241 00:12:59.224 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:59.224 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:59.224 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:59.224 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:59.224 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:59.224 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:59.224 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:59.224 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:59.224 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:59.224 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.224 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.224 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.759 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:01.759 00:13:01.759 real 0m11.055s 00:13:01.759 user 0m9.941s 00:13:01.759 sys 0m5.603s 00:13:01.759 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.759 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:01.759 ************************************ 00:13:01.759 END TEST nvmf_multitarget 00:13:01.759 ************************************ 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:01.759 ************************************ 00:13:01.759 START TEST nvmf_rpc 00:13:01.759 ************************************ 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:01.759 * Looking for test storage... 
00:13:01.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:01.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.759 --rc genhtml_branch_coverage=1 00:13:01.759 --rc genhtml_function_coverage=1 00:13:01.759 --rc genhtml_legend=1 00:13:01.759 --rc geninfo_all_blocks=1 00:13:01.759 --rc geninfo_unexecuted_blocks=1 00:13:01.759 00:13:01.759 ' 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:01.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.759 --rc genhtml_branch_coverage=1 00:13:01.759 --rc genhtml_function_coverage=1 00:13:01.759 --rc genhtml_legend=1 00:13:01.759 --rc geninfo_all_blocks=1 00:13:01.759 --rc geninfo_unexecuted_blocks=1 00:13:01.759 00:13:01.759 ' 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:01.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.759 --rc genhtml_branch_coverage=1 00:13:01.759 --rc genhtml_function_coverage=1 00:13:01.759 --rc genhtml_legend=1 00:13:01.759 --rc geninfo_all_blocks=1 00:13:01.759 --rc geninfo_unexecuted_blocks=1 00:13:01.759 00:13:01.759 ' 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:01.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.759 --rc genhtml_branch_coverage=1 00:13:01.759 --rc genhtml_function_coverage=1 00:13:01.759 --rc genhtml_legend=1 00:13:01.759 --rc geninfo_all_blocks=1 00:13:01.759 --rc geninfo_unexecuted_blocks=1 00:13:01.759 00:13:01.759 ' 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:01.759 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:01.760 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:01.760 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.760 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.760 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:01.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:01.760 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:01.760 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:01.760 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:01.760 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:01.760 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:01.760 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:01.760 14:15:02 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:01.760 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:01.760 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:01.760 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:01.760 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.760 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.760 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.760 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:01.760 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:01.760 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:01.760 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:08.328 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:08.329 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:08.329 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:08.329 Found net devices under 0000:af:00.0: cvl_0_0 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:08.329 Found net devices under 0000:af:00.1: cvl_0_1 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:08.329 14:15:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:08.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:08.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:13:08.329 00:13:08.329 --- 10.0.0.2 ping statistics --- 00:13:08.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.329 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:08.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:08.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:13:08.329 00:13:08.329 --- 10.0.0.1 ping statistics --- 00:13:08.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.329 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:08.329 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:08.329 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:08.329 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:08.329 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:08.329 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.329 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1574014 00:13:08.329 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1574014 00:13:08.329 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:08.329 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1574014 ']' 00:13:08.329 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.329 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.329 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.329 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.329 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.329 [2024-12-10 14:15:09.061465] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
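
The device scan above found the two E810 ports (0000:af:00.0/1, bound to the ice driver and exposed as cvl_0_0 and cvl_0_1), and nvmf_tcp_init then splits them across network namespaces so one machine can act as both TCP initiator and target over real hardware: cvl_0_0 moves into a fresh cvl_0_0_ns_spdk namespace as 10.0.0.2, cvl_0_1 stays in the default namespace as 10.0.0.1, an iptables rule opens the NVMe/TCP port, and both directions are ping-verified. Condensed into a runnable form (interface names, addresses, and port copied from the trace; needs root):

    TARGET_NS=cvl_0_0_ns_spdk

    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"             # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the default ns
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # The two connectivity checks that produced the ping output above:
    ping -c 1 10.0.0.2                                 # default ns -> target ns
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1      # target ns -> default ns

The target application is then launched inside that namespace (`ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF` just below), so every listener it creates binds in the target namespace while the initiator-side nvme-cli commands run in the default one; the RPC socket at /var/tmp/spdk.sock is a UNIX socket, so it stays reachable from either namespace.
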
00:13:08.329 [2024-12-10 14:15:09.061506] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.587 [2024-12-10 14:15:09.133702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:08.587 [2024-12-10 14:15:09.175192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.587 [2024-12-10 14:15:09.175232] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:08.587 [2024-12-10 14:15:09.175242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.588 [2024-12-10 14:15:09.175248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.588 [2024-12-10 14:15:09.175253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:08.588 [2024-12-10 14:15:09.179234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.588 [2024-12-10 14:15:09.179268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.588 [2024-12-10 14:15:09.179373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.588 [2024-12-10 14:15:09.179374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:08.588 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.588 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:08.588 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:08.588 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:08.588 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.588 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.588 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:08.588 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.588 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.846 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.846 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:08.846 "tick_rate": 2100000000, 00:13:08.846 "poll_groups": [ 00:13:08.846 { 00:13:08.846 "name": "nvmf_tgt_poll_group_000", 00:13:08.846 "admin_qpairs": 0, 00:13:08.846 "io_qpairs": 0, 00:13:08.846 "current_admin_qpairs": 0, 00:13:08.846 "current_io_qpairs": 0, 00:13:08.846 "pending_bdev_io": 0, 00:13:08.846 "completed_nvme_io": 0, 00:13:08.846 "transports": [] 00:13:08.846 }, 00:13:08.846 { 00:13:08.846 "name": "nvmf_tgt_poll_group_001", 00:13:08.846 "admin_qpairs": 0, 00:13:08.846 "io_qpairs": 0, 00:13:08.846 "current_admin_qpairs": 0, 00:13:08.846 "current_io_qpairs": 0, 00:13:08.846 "pending_bdev_io": 0, 00:13:08.846 "completed_nvme_io": 0, 00:13:08.846 "transports": [] 00:13:08.846 }, 00:13:08.846 { 00:13:08.846 "name": "nvmf_tgt_poll_group_002", 00:13:08.846 "admin_qpairs": 0, 00:13:08.846 "io_qpairs": 0, 00:13:08.846 
"current_admin_qpairs": 0, 00:13:08.846 "current_io_qpairs": 0, 00:13:08.846 "pending_bdev_io": 0, 00:13:08.846 "completed_nvme_io": 0, 00:13:08.846 "transports": [] 00:13:08.846 }, 00:13:08.846 { 00:13:08.846 "name": "nvmf_tgt_poll_group_003", 00:13:08.846 "admin_qpairs": 0, 00:13:08.846 "io_qpairs": 0, 00:13:08.846 "current_admin_qpairs": 0, 00:13:08.846 "current_io_qpairs": 0, 00:13:08.846 "pending_bdev_io": 0, 00:13:08.846 "completed_nvme_io": 0, 00:13:08.846 "transports": [] 00:13:08.846 } 00:13:08.846 ] 00:13:08.846 }' 00:13:08.846 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:08.846 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:08.846 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:08.846 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:08.846 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:08.846 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:08.846 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:08.846 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:08.846 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.846 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.846 [2024-12-10 14:15:09.432323] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:08.846 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.846 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:08.846 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.846 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.846 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.846 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:08.846 "tick_rate": 2100000000, 00:13:08.846 "poll_groups": [ 00:13:08.846 { 00:13:08.846 "name": "nvmf_tgt_poll_group_000", 00:13:08.846 "admin_qpairs": 0, 00:13:08.846 "io_qpairs": 0, 00:13:08.846 "current_admin_qpairs": 0, 00:13:08.846 "current_io_qpairs": 0, 00:13:08.846 "pending_bdev_io": 0, 00:13:08.846 "completed_nvme_io": 0, 00:13:08.846 "transports": [ 00:13:08.846 { 00:13:08.846 "trtype": "TCP" 00:13:08.846 } 00:13:08.846 ] 00:13:08.846 }, 00:13:08.846 { 00:13:08.846 "name": "nvmf_tgt_poll_group_001", 00:13:08.846 "admin_qpairs": 0, 00:13:08.846 "io_qpairs": 0, 00:13:08.846 "current_admin_qpairs": 0, 00:13:08.846 "current_io_qpairs": 0, 00:13:08.846 "pending_bdev_io": 0, 00:13:08.846 "completed_nvme_io": 0, 00:13:08.846 "transports": [ 00:13:08.846 { 00:13:08.846 "trtype": "TCP" 00:13:08.846 } 00:13:08.846 ] 00:13:08.846 }, 00:13:08.846 { 00:13:08.846 "name": "nvmf_tgt_poll_group_002", 00:13:08.846 "admin_qpairs": 0, 00:13:08.846 "io_qpairs": 0, 00:13:08.846 "current_admin_qpairs": 0, 00:13:08.846 "current_io_qpairs": 0, 00:13:08.847 "pending_bdev_io": 0, 00:13:08.847 "completed_nvme_io": 0, 00:13:08.847 "transports": [ 00:13:08.847 { 00:13:08.847 "trtype": "TCP" 
00:13:08.847 } 00:13:08.847 ] 00:13:08.847 }, 00:13:08.847 { 00:13:08.847 "name": "nvmf_tgt_poll_group_003", 00:13:08.847 "admin_qpairs": 0, 00:13:08.847 "io_qpairs": 0, 00:13:08.847 "current_admin_qpairs": 0, 00:13:08.847 "current_io_qpairs": 0, 00:13:08.847 "pending_bdev_io": 0, 00:13:08.847 "completed_nvme_io": 0, 00:13:08.847 "transports": [ 00:13:08.847 { 00:13:08.847 "trtype": "TCP" 00:13:08.847 } 00:13:08.847 ] 00:13:08.847 } 00:13:08.847 ] 00:13:08.847 }' 00:13:08.847 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:08.847 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:08.847 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:08.847 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:08.847 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:08.847 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:08.847 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:08.847 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:08.847 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:08.847 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:08.847 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:08.847 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:08.847 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:08.847 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:08.847 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.847 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.105 Malloc1 00:13:09.105 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.105 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:09.105 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.105 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.105 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.105 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:09.105 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.105 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.105 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.105 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:09.105 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.105 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.105 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.105 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.105 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.105 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.105 [2024-12-10 14:15:09.622065] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.105 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.105 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:09.105 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:09.105 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:09.106 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:09.106 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:09.106 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:09.106 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:09.106 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:09.106 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:09.106 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:09.106 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:09.106 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:09.106 [2024-12-10 14:15:09.656659] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:13:09.106 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:09.106 could not add new controller: failed to write to nvme-fabrics device 00:13:09.106 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:09.106 14:15:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:09.106 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:09.106 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:09.106 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:09.106 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.106 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.106 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.106 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:10.480 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:10.480 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:10.480 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.480 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:10.480 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:12.380 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:12.380 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:12.380 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.380 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:12.380 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.380 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:12.380 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.380 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.380 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:12.380 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:12.380 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.380 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:12.381 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.381 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:12.381 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:12.381 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.381 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.381 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.381 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.381 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:12.381 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.381 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:12.381 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:12.381 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:12.381 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:12.381 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:12.381 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:12.381 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:12.381 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:12.381 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.381 [2024-12-10 14:15:13.032692] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:13:12.381 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:12.381 could not add new controller: failed to write to nvme-fabrics device 00:13:12.381 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:12.381 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:12.381 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:12.381 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:12.381 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:12.381 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.381 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.381 
14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.381 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:13.756 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:13.756 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:13.756 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:13.756 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:13.756 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:15.655 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:15.655 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:15.655 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:15.655 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:15.656 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:15.656 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:15.656 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:15.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.656 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:15.656 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:15.656 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:15.656 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.656 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:15.656 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.656 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:15.656 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.656 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.656 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.656 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.656 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:15.656 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:15.656 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.656 
14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.656 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.656 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.656 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.656 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.656 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.656 [2024-12-10 14:15:16.393696] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.914 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.914 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:15.914 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.914 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.914 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.914 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.914 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.914 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.914 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.914 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:16.875 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:16.875 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:16.875 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:16.875 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:16.875 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:18.823 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:18.823 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:18.823 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:18.823 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:18.823 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:18.823 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:18.823 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:19.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.082 [2024-12-10 14:15:19.666948] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.082 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:20.457 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:20.457 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:20.457 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:20.457 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:20.457 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:22.359 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:22.359 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:22.359 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:22.359 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:22.359 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:22.359 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:22.359 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.359 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:22.359 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:22.359 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:22.359 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.359 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:22.359 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.359 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:22.359 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.360 [2024-12-10 14:15:22.967245] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.360 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:23.735 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:23.735 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:23.735 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:23.735 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:23.735 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:25.636 
14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:25.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
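[editor's note] The block above is one complete pass of the provisioning loop in target/rpc.sh: create the subsystem with a known serial, expose it over TCP, attach the Malloc1 namespace, open it to any host, connect from the initiator, verify the serial appears, then tear it all back down. A minimal sketch of that loop, reconstructed from the commands visible in this trace (rpc_cmd, waitforserial and waitforserial_disconnect are the helpers the log itself invokes; the value of $loops is assumed to be 5 from the "seq 1 5" seen later):

    for i in $(seq 1 $loops); do
        # build the subsystem and make it reachable over NVMe/TCP
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        # connect from the host side and wait until the serial is visible
        nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
            -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        # tear down in reverse order
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done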
00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.636 [2024-12-10 14:15:26.276977] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.636 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:27.011 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:27.011 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:27.011 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:27.011 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:27.011 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:28.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
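[editor's note] Both waitforserial and waitforserial_disconnect poll lsblk for the subsystem's serial rather than trusting nvme connect/disconnect to be synchronous. A rough reconstruction of the wait loop traced above (the real helper lives in common/autotest_common.sh; the 15-iteration cap and 2-second sleep are taken from the "(( i++ <= 15 ))" and "sleep 2" lines in this trace, and the exact check/sleep ordering is approximated):

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=${2:-1} nvme_devices=0
        while (( i++ <= 15 )); do
            # count block devices whose SERIAL column matches the expected serial
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2
        done
        return 1
    }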
00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.915 [2024-12-10 14:15:29.590965] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.915 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:30.290 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:30.290 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:30.290 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:30.290 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:30.290 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:32.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:32.191 
14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.191 [2024-12-10 14:15:32.903072] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.191 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.450 [2024-12-10 14:15:32.951171] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.450 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.450 
14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.450 [2024-12-10 14:15:32.999308] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.450 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.450 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:32.450 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.450 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.450 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.450 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:32.450 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.450 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.450 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.450 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.450 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.450 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.450 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.450 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.451 [2024-12-10 14:15:33.047485] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.451 [2024-12-10 14:15:33.095635] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:32.451 "tick_rate": 2100000000, 00:13:32.451 "poll_groups": [ 00:13:32.451 { 00:13:32.451 "name": "nvmf_tgt_poll_group_000", 00:13:32.451 "admin_qpairs": 2, 00:13:32.451 "io_qpairs": 168, 00:13:32.451 "current_admin_qpairs": 0, 00:13:32.451 "current_io_qpairs": 0, 00:13:32.451 "pending_bdev_io": 0, 00:13:32.451 "completed_nvme_io": 218, 00:13:32.451 "transports": [ 00:13:32.451 { 00:13:32.451 "trtype": "TCP" 00:13:32.451 } 00:13:32.451 ] 00:13:32.451 }, 00:13:32.451 { 00:13:32.451 "name": "nvmf_tgt_poll_group_001", 00:13:32.451 "admin_qpairs": 2, 00:13:32.451 "io_qpairs": 168, 00:13:32.451 "current_admin_qpairs": 0, 00:13:32.451 "current_io_qpairs": 0, 00:13:32.451 "pending_bdev_io": 0, 00:13:32.451 "completed_nvme_io": 268, 00:13:32.451 "transports": [ 00:13:32.451 { 00:13:32.451 "trtype": "TCP" 00:13:32.451 } 00:13:32.451 ] 00:13:32.451 }, 00:13:32.451 { 00:13:32.451 "name": "nvmf_tgt_poll_group_002", 00:13:32.451 "admin_qpairs": 1, 00:13:32.451 "io_qpairs": 168, 00:13:32.451 "current_admin_qpairs": 0, 00:13:32.451 "current_io_qpairs": 0, 00:13:32.451 "pending_bdev_io": 0, 00:13:32.451 "completed_nvme_io": 268, 00:13:32.451 "transports": [ 00:13:32.451 { 00:13:32.451 "trtype": "TCP" 00:13:32.451 } 00:13:32.451 ] 00:13:32.451 }, 00:13:32.451 { 00:13:32.451 "name": "nvmf_tgt_poll_group_003", 00:13:32.451 "admin_qpairs": 2, 00:13:32.451 "io_qpairs": 168, 00:13:32.451 "current_admin_qpairs": 0, 00:13:32.451 "current_io_qpairs": 0, 00:13:32.451 "pending_bdev_io": 0, 00:13:32.451 "completed_nvme_io": 268, 00:13:32.451 "transports": [ 00:13:32.451 { 00:13:32.451 "trtype": "TCP" 00:13:32.451 } 00:13:32.451 ] 00:13:32.451 } 00:13:32.451 ] 00:13:32.451 }' 00:13:32.451 14:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:32.451 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:32.710 rmmod nvme_tcp 00:13:32.710 rmmod nvme_fabrics 00:13:32.710 rmmod nvme_keyring 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1574014 ']' 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1574014 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1574014 ']' 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1574014 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1574014 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1574014' 00:13:32.710 killing process with pid 1574014 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1574014 00:13:32.710 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1574014 00:13:32.969 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:32.969 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:32.969 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:32.969 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:32.969 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:32.969 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:32.969 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:32.969 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:32.969 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:32.969 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.969 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.969 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.872 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:35.130 00:13:35.130 real 0m33.557s 00:13:35.130 user 1m39.013s 00:13:35.130 sys 0m7.150s 00:13:35.130 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.130 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.130 ************************************ 00:13:35.130 END TEST nvmf_rpc 00:13:35.130 ************************************ 00:13:35.130 14:15:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:35.130 14:15:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:35.130 14:15:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.130 14:15:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:35.130 ************************************ 00:13:35.130 START TEST nvmf_invalid 00:13:35.130 ************************************ 00:13:35.130 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:35.130 * Looking for test storage... 
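[editor's note] The qpair accounting that closed out nvmf_rpc just above relies on the jsum helper: run a jq filter over the captured nvmf_get_stats JSON and sum the resulting column with awk. A sketch of what the traced jq/awk pair computes (the here-string feeding of $stats is an assumption; the filter and awk program are verbatim from the trace):

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1}END{print s}'
    }
    # From the stats dump above:
    #   jsum '.poll_groups[].admin_qpairs'  -> 2+2+1+2 = 7
    #   jsum '.poll_groups[].io_qpairs'     -> 4*168   = 672
    # The test only asserts that each sum is > 0.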
00:13:35.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.130 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:35.130 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:13:35.130 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:35.130 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:35.130 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:35.130 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:35.131 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:35.131 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.131 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:35.131 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:35.131 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:35.131 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:35.131 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:35.131 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:35.131 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:35.131 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:35.131 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:35.131 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:35.131 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:35.131 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:35.131 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:35.131 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.131 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:35.131 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:35.131 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:35.131 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:35.131 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:35.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.389 --rc genhtml_branch_coverage=1 00:13:35.389 --rc genhtml_function_coverage=1 00:13:35.389 --rc genhtml_legend=1 00:13:35.389 --rc geninfo_all_blocks=1 00:13:35.389 --rc geninfo_unexecuted_blocks=1 00:13:35.389 00:13:35.389 ' 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:35.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.389 --rc genhtml_branch_coverage=1 00:13:35.389 --rc genhtml_function_coverage=1 00:13:35.389 --rc genhtml_legend=1 00:13:35.389 --rc geninfo_all_blocks=1 00:13:35.389 --rc geninfo_unexecuted_blocks=1 00:13:35.389 00:13:35.389 ' 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:35.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.389 --rc genhtml_branch_coverage=1 00:13:35.389 --rc genhtml_function_coverage=1 00:13:35.389 --rc genhtml_legend=1 00:13:35.389 --rc geninfo_all_blocks=1 00:13:35.389 --rc geninfo_unexecuted_blocks=1 00:13:35.389 00:13:35.389 ' 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:35.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.389 --rc genhtml_branch_coverage=1 00:13:35.389 --rc genhtml_function_coverage=1 00:13:35.389 --rc genhtml_legend=1 00:13:35.389 --rc geninfo_all_blocks=1 00:13:35.389 --rc geninfo_unexecuted_blocks=1 00:13:35.389 00:13:35.389 ' 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:35.389 14:15:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.389 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:35.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:35.390 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:41.959 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:41.959 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:41.959 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:41.959 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:41.959 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:41.959 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:41.959 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:41.959 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:41.959 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:41.959 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:41.959 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:41.959 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:41.959 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:41.959 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:41.959 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:41.959 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
00:13:41.959 [nvmf/common.sh@325-344 supported-device table condensed: e810 gets IDs 0x1592 and 0x159b; x722 gets 0x37d2; mlx gets 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015, 0x1013, all via pci_bus_cache lookups]
00:13:41.959 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:13:41.959 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:13:41.959 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:13:41.959 Found 0000:af:00.0 (0x8086 - 0x159b)
00:13:41.959 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:13:41.959 Found 0000:af:00.1 (0x8086 - 0x159b)
00:13:41.959 [nvmf/common.sh@368-378 per-device checks condensed: driver ice is neither unknown nor unbound, 0x159b is not an mlx part, transport tcp != rdma, so both ports stay in pci_devs]
00:13:41.959 [nvmf/common.sh@410-427 per-port sysfs walk condensed: each port's net/ glob is expanded and the interface state checked up]
00:13:41.960 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:41.960 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:13:41.960 Found net devices under 0000:af:00.0: cvl_0_0
00:13:41.960 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:13:41.960 Found net devices under 0000:af:00.1: cvl_0_1
00:13:41.960 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes
00:13:41.960 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init
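The discovery above reduces to globbing sysfs for the interfaces bound to each PCI function. A standalone sketch of the same lookup, using the PCI address from the trace:

    # List net devices backing a PCI function, mirroring the
    # "/sys/bus/pci/devices/$pci/net/"* glob used in nvmf/common.sh.
    pci=0000:af:00.0
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] || continue          # no interface bound to this function
        echo "Found net devices under $pci: ${path##*/}"
    done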
00:13:41.960 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:13:41.960 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:41.960 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:41.960 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:41.960 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:41.960 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:13:41.960 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:13:41.960 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:13:41.960 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:41.960 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:41.960 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:41.960 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:13:41.960 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:42.219 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:42.219 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:42.219 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:42.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:42.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms
00:13:42.219 --- 10.0.0.2 ping statistics ---
00:13:42.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:42.219 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms
00:13:42.219 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:42.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:42.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms
00:13:42.219 --- 10.0.0.1 ping statistics ---
00:13:42.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:42.219 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms
00:13:42.219 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:42.219 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0
00:13:42.219 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:42.219 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:42.219 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:13:42.219 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:42.219 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1582066
00:13:42.219 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1582066
00:13:42.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:42.219 [2024-12-10 14:15:42.826130] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization...
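Condensed to its essentials, nvmf_tcp_init moves one NIC port into a private namespace, addresses both ends of the link, opens the NVMe/TCP port, verifies reachability, and only then launches the target. A sketch of the same sequence, with interface and namespace names taken from the trace (must run as root):

    ip netns add cvl_0_0_ns_spdk                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays behind
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP default port
    ping -c 1 10.0.0.2                            # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -m 0xF &    # target on cores 0-3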
00:13:42.219 [2024-12-10 14:15:42.826172] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:42.219 [2024-12-10 14:15:42.907409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:42.219 [2024-12-10 14:15:42.948644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:42.219 [2024-12-10 14:15:42.948678] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:42.219 [2024-12-10 14:15:42.948685] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:42.219 [2024-12-10 14:15:42.948691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:42.219 [2024-12-10 14:15:42.948696] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:42.219 [2024-12-10 14:15:42.950054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:42.219 [2024-12-10 14:15:42.950164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:42.219 [2024-12-10 14:15:42.950275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:42.219 [2024-12-10 14:15:42.950275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:43.154 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0
00:13:43.154 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:13:43.154 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:43.154 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29274
00:13:43.154 [2024-12-10 14:15:43.870275] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:13:43.413 request: { "nqn": "nqn.2016-06.io.spdk:cnode29274", "tgt_name": "foobar", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32603, "message": "Unable to find target foobar" }
00:13:43.413 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ $out == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:13:43.413 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15210
00:13:43.413 [2024-12-10 14:15:44.074998] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15210: invalid serial number 'SPDKISFASTANDAWESOME'
00:13:43.413 request: { "nqn": "nqn.2016-06.io.spdk:cnode15210", "serial_number": "SPDKISFASTANDAWESOME\u001f", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" }
00:13:43.413 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ $out == *\I\n\v\a\l\i\d\ \S\N* ]]
00:13:43.413 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode744
00:13:43.672 [2024-12-10 14:15:44.279626] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode744: invalid model number 'SPDK_Controller'
00:13:43.672 request: { "nqn": "nqn.2016-06.io.spdk:cnode744", "model_number": "SPDK_Controller\u001f", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid MN SPDK_Controller\u001f" }
00:13:43.672 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ $out == *\I\n\v\a\l\i\d\ \M\N* ]]
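Each case above follows the same three-step pattern: issue the RPC, capture the JSON-RPC error text, and assert on a message substring. A standalone sketch of that pattern, run from an SPDK checkout (the cnode number is illustrative):

    # Expect nvmf_create_subsystem to reject a serial number that
    # contains the non-printable byte 0x1f; fail the test otherwise.
    out=$(scripts/rpc.py nvmf_create_subsystem \
            -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1 2>&1) || true
    [[ $out == *"Invalid SN"* ]] || exit 1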
00:13:43.672 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:13:43.672 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:13:43.673 [target/invalid.sh@21-28 trace condensed: chars=(32..127), then 21 printf %x / echo -e / string+= iterations build the serial number character by character]
00:13:43.932 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ' TsZhBDzMV.u6Q*#{}3#'
00:13:43.932 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ' TsZhBDzMV.u6Q*#{}3#' nqn.2016-06.io.spdk:cnode27940
00:13:43.932 [2024-12-10 14:15:44.628770] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27940: invalid serial number ' TsZhBDzMV.u6Q*#{}3#'
00:13:43.932 request: { "nqn": "nqn.2016-06.io.spdk:cnode27940", "serial_number": " TsZhBDzMV.u6Q*\u007f#{}3#", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid SN  TsZhBDzMV.u6Q*\u007f#{}3#" }
00:13:43.932 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ $out == *\I\n\v\a\l\i\d\ \S\N* ]]
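The elided per-character trace comes from gen_random_s; below is a condensed reconstruction from what the trace shows (the real helper lives in test/nvmf/target/invalid.sh, and RANDOM=0 earlier makes its output reproducible). The 21-character result matters because an NVMe serial-number field holds at most 20 bytes, and this string also picks up a non-printable DEL byte, so the request is rejected either way:

    # Build a string of $1 characters drawn from ASCII 32-127,
    # one printf %x / echo -e pair per character, as traced above.
    gen_random_s() {
        local length=$1 ll string=
        for (( ll = 0; ll < length; ll++ )); do
            string+=$(echo -e "\\x$(printf %x $((RANDOM % 96 + 32)))")
        done
        echo "$string"
    }
    gen_random_s 21    # one byte too long for a valid serial number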
00:13:43.932 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:13:43.932 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:13:44.192 [target/invalid.sh@21-28 trace condensed: the same 96-entry chars table, then 41 printf %x / echo -e / string+= iterations build the model number character by character]
== \- ]] 00:13:44.454 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '!r7Qk|Vy \_(L-c:B|3[~]Byi1!-H' 00:13:44.454 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '!r7Qk|Vy \_(L-c:B|3[~]Byi1!-H' nqn.2016-06.io.spdk:cnode31578 00:13:44.454 [2024-12-10 14:15:45.110334] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31578: invalid model number '!r7Qk|Vy \_(L-c:B|3[~]Byi1!-H' 00:13:44.454 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:44.454 { 00:13:44.454 "nqn": "nqn.2016-06.io.spdk:cnode31578", 00:13:44.454 "model_number": "!r7Qk|Vy \\_(L\u007f-c:B|3[~]Byi1!-H", 00:13:44.454 "method": "nvmf_create_subsystem", 00:13:44.454 "req_id": 1 00:13:44.454 } 00:13:44.454 Got JSON-RPC error response 00:13:44.454 response: 00:13:44.454 { 00:13:44.454 "code": -32602, 00:13:44.454 "message": "Invalid MN !r7Qk|Vy \\_(L\u007f-c:B|3[~]Byi1!-H" 00:13:44.454 }' 00:13:44.454 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:44.454 { 00:13:44.454 "nqn": "nqn.2016-06.io.spdk:cnode31578", 00:13:44.454 "model_number": "!r7Qk|Vy \\_(L\u007f-c:B|3[~]Byi1!-H", 00:13:44.454 "method": "nvmf_create_subsystem", 00:13:44.454 "req_id": 1 00:13:44.454 } 00:13:44.454 Got JSON-RPC error response 00:13:44.454 response: 00:13:44.454 { 00:13:44.454 "code": -32602, 00:13:44.454 "message": "Invalid MN !r7Qk|Vy \\_(L\u007f-c:B|3[~]Byi1!-H" 00:13:44.454 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:44.454 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:44.712 [2024-12-10 14:15:45.307062] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.712 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:44.970 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:44.970 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:44.970 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:44.971 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:44.971 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:44.971 [2024-12-10 14:15:45.709671] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:45.230 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:45.230 { 00:13:45.230 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:45.230 "listen_address": { 00:13:45.230 "trtype": "tcp", 00:13:45.230 "traddr": "", 00:13:45.230 "trsvcid": "4421" 00:13:45.230 }, 00:13:45.230 "method": "nvmf_subsystem_remove_listener", 00:13:45.230 "req_id": 1 00:13:45.230 } 00:13:45.230 Got JSON-RPC error response 00:13:45.230 response: 00:13:45.230 { 00:13:45.230 "code": -32602, 00:13:45.230 "message": "Invalid parameters" 00:13:45.230 }' 00:13:45.230 14:15:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:45.230 { 00:13:45.230 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:45.230 "listen_address": { 00:13:45.230 "trtype": "tcp", 00:13:45.230 "traddr": "", 00:13:45.230 "trsvcid": "4421" 00:13:45.230 }, 00:13:45.230 "method": "nvmf_subsystem_remove_listener", 00:13:45.230 "req_id": 1 00:13:45.230 } 00:13:45.230 Got JSON-RPC error response 00:13:45.230 response: 00:13:45.230 { 00:13:45.230 "code": -32602, 00:13:45.230 "message": "Invalid parameters" 00:13:45.230 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:45.230 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28072 -i 0 00:13:45.230 [2024-12-10 14:15:45.918307] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28072: invalid cntlid range [0-65519] 00:13:45.230 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:45.230 { 00:13:45.230 "nqn": "nqn.2016-06.io.spdk:cnode28072", 00:13:45.230 "min_cntlid": 0, 00:13:45.230 "method": "nvmf_create_subsystem", 00:13:45.230 "req_id": 1 00:13:45.230 } 00:13:45.230 Got JSON-RPC error response 00:13:45.230 response: 00:13:45.230 { 00:13:45.230 "code": -32602, 00:13:45.230 "message": "Invalid cntlid range [0-65519]" 00:13:45.230 }' 00:13:45.230 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:45.230 { 00:13:45.230 "nqn": "nqn.2016-06.io.spdk:cnode28072", 00:13:45.230 "min_cntlid": 0, 00:13:45.230 "method": "nvmf_create_subsystem", 00:13:45.230 "req_id": 1 00:13:45.230 } 00:13:45.230 Got JSON-RPC error response 00:13:45.230 response: 00:13:45.230 { 00:13:45.230 "code": -32602, 00:13:45.230 "message": "Invalid cntlid range [0-65519]" 00:13:45.230 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:45.230 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32625 -i 65520 00:13:45.489 [2024-12-10 14:15:46.139081] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32625: invalid cntlid range [65520-65519] 00:13:45.489 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:45.489 { 00:13:45.489 "nqn": "nqn.2016-06.io.spdk:cnode32625", 00:13:45.489 "min_cntlid": 65520, 00:13:45.489 "method": "nvmf_create_subsystem", 00:13:45.489 "req_id": 1 00:13:45.489 } 00:13:45.489 Got JSON-RPC error response 00:13:45.489 response: 00:13:45.489 { 00:13:45.489 "code": -32602, 00:13:45.489 "message": "Invalid cntlid range [65520-65519]" 00:13:45.489 }' 00:13:45.489 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:45.489 { 00:13:45.489 "nqn": "nqn.2016-06.io.spdk:cnode32625", 00:13:45.489 "min_cntlid": 65520, 00:13:45.489 "method": "nvmf_create_subsystem", 00:13:45.489 "req_id": 1 00:13:45.489 } 00:13:45.489 Got JSON-RPC error response 00:13:45.489 response: 00:13:45.489 { 00:13:45.489 "code": -32602, 00:13:45.489 "message": "Invalid cntlid range [65520-65519]" 00:13:45.489 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:45.489 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode31772 -I 0 00:13:45.748 [2024-12-10 14:15:46.339715] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31772: invalid cntlid range [1-0] 00:13:45.748 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:45.748 { 00:13:45.748 "nqn": "nqn.2016-06.io.spdk:cnode31772", 00:13:45.748 "max_cntlid": 0, 00:13:45.748 "method": "nvmf_create_subsystem", 00:13:45.748 "req_id": 1 00:13:45.748 } 00:13:45.748 Got JSON-RPC error response 00:13:45.748 response: 00:13:45.748 { 00:13:45.748 "code": -32602, 00:13:45.748 "message": "Invalid cntlid range [1-0]" 00:13:45.748 }' 00:13:45.748 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:45.748 { 00:13:45.748 "nqn": "nqn.2016-06.io.spdk:cnode31772", 00:13:45.748 "max_cntlid": 0, 00:13:45.748 "method": "nvmf_create_subsystem", 00:13:45.748 "req_id": 1 00:13:45.748 } 00:13:45.748 Got JSON-RPC error response 00:13:45.748 response: 00:13:45.748 { 00:13:45.748 "code": -32602, 00:13:45.748 "message": "Invalid cntlid range [1-0]" 00:13:45.748 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:45.748 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8623 -I 65520 00:13:46.006 [2024-12-10 14:15:46.536389] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8623: invalid cntlid range [1-65520] 00:13:46.006 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:46.006 { 00:13:46.006 "nqn": "nqn.2016-06.io.spdk:cnode8623", 00:13:46.006 "max_cntlid": 65520, 00:13:46.006 "method": "nvmf_create_subsystem", 00:13:46.006 "req_id": 1 00:13:46.006 } 00:13:46.006 Got JSON-RPC error response 00:13:46.006 response: 00:13:46.006 { 00:13:46.006 "code": -32602, 00:13:46.006 "message": "Invalid cntlid range [1-65520]" 00:13:46.006 }' 00:13:46.006 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:46.006 { 00:13:46.006 "nqn": "nqn.2016-06.io.spdk:cnode8623", 00:13:46.006 "max_cntlid": 65520, 00:13:46.006 "method": "nvmf_create_subsystem", 00:13:46.006 "req_id": 1 00:13:46.006 } 00:13:46.006 Got JSON-RPC error response 00:13:46.006 response: 00:13:46.006 { 00:13:46.006 "code": -32602, 00:13:46.006 "message": "Invalid cntlid range [1-65520]" 00:13:46.006 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:46.006 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23230 -i 6 -I 5 00:13:46.006 [2024-12-10 14:15:46.733050] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23230: invalid cntlid range [6-5] 00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:46.264 { 00:13:46.264 "nqn": "nqn.2016-06.io.spdk:cnode23230", 00:13:46.264 "min_cntlid": 6, 00:13:46.264 "max_cntlid": 5, 00:13:46.264 "method": "nvmf_create_subsystem", 00:13:46.264 "req_id": 1 00:13:46.264 } 00:13:46.264 Got JSON-RPC error response 00:13:46.264 response: 00:13:46.264 { 00:13:46.264 "code": -32602, 00:13:46.264 "message": "Invalid cntlid range [6-5]" 00:13:46.264 }' 00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 
00:13:46.264 { 00:13:46.264 "nqn": "nqn.2016-06.io.spdk:cnode23230", 00:13:46.264 "min_cntlid": 6, 00:13:46.264 "max_cntlid": 5, 00:13:46.264 "method": "nvmf_create_subsystem", 00:13:46.264 "req_id": 1 00:13:46.264 } 00:13:46.264 Got JSON-RPC error response 00:13:46.264 response: 00:13:46.264 { 00:13:46.264 "code": -32602, 00:13:46.264 "message": "Invalid cntlid range [6-5]" 00:13:46.264 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:46.264 { 00:13:46.264 "name": "foobar", 00:13:46.264 "method": "nvmf_delete_target", 00:13:46.264 "req_id": 1 00:13:46.264 } 00:13:46.264 Got JSON-RPC error response 00:13:46.264 response: 00:13:46.264 { 00:13:46.264 "code": -32602, 00:13:46.264 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:46.264 }' 00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:46.264 { 00:13:46.264 "name": "foobar", 00:13:46.264 "method": "nvmf_delete_target", 00:13:46.264 "req_id": 1 00:13:46.264 } 00:13:46.264 Got JSON-RPC error response 00:13:46.264 response: 00:13:46.264 { 00:13:46.264 "code": -32602, 00:13:46.264 "message": "The specified target doesn't exist, cannot delete it." 00:13:46.264 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:46.264 rmmod nvme_tcp 00:13:46.264 rmmod nvme_fabrics 00:13:46.264 rmmod nvme_keyring 00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1582066 ']' 00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 1582066 00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1582066 ']' 00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1582066 00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']'
00:13:46.264 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1582066
00:13:46.264 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:46.264 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:46.264 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1582066'
00:13:46.264 killing process with pid 1582066
00:13:46.264 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1582066
00:13:46.264 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1582066
00:13:46.523 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:46.523 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:13:46.523 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:13:46.523 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr
00:13:46.523 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore
00:13:46.523 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save
00:13:46.523 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:46.523 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:46.523 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:46.523 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:46.523 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:46.523 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:49.060
00:13:49.060 real 0m13.542s
00:13:49.060 user 0m21.420s
00:13:49.060 sys 0m6.089s
00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:49.060 ************************************
00:13:49.060 END TEST nvmf_invalid
00:13:49.060 ************************************
00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:49.060 ************************************
00:13:49.060 START TEST nvmf_connect_stress
00:13:49.060 ************************************
00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:49.060 * Looking for test storage... 00:13:49.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:49.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.060 --rc genhtml_branch_coverage=1 00:13:49.060 --rc genhtml_function_coverage=1 00:13:49.060 --rc genhtml_legend=1 00:13:49.060 --rc geninfo_all_blocks=1 00:13:49.060 --rc geninfo_unexecuted_blocks=1 00:13:49.060 00:13:49.060 ' 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:49.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.060 --rc genhtml_branch_coverage=1 00:13:49.060 --rc genhtml_function_coverage=1 00:13:49.060 --rc genhtml_legend=1 00:13:49.060 --rc geninfo_all_blocks=1 00:13:49.060 --rc geninfo_unexecuted_blocks=1 00:13:49.060 00:13:49.060 ' 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:49.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.060 --rc genhtml_branch_coverage=1 00:13:49.060 --rc genhtml_function_coverage=1 00:13:49.060 --rc genhtml_legend=1 00:13:49.060 --rc geninfo_all_blocks=1 00:13:49.060 --rc geninfo_unexecuted_blocks=1 00:13:49.060 00:13:49.060 ' 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:49.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.060 --rc genhtml_branch_coverage=1 00:13:49.060 --rc genhtml_function_coverage=1 00:13:49.060 --rc genhtml_legend=1 00:13:49.060 --rc geninfo_all_blocks=1 00:13:49.060 --rc geninfo_unexecuted_blocks=1 00:13:49.060 00:13:49.060 ' 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.060 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:49.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:49.061 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:55.637 14:15:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:55.637 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:55.637 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:55.637 Found net devices under 0000:af:00.0: cvl_0_0 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:55.637 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:55.638 Found net devices under 0000:af:00.1: cvl_0_1 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:55.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:55.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms
00:13:55.638
00:13:55.638 --- 10.0.0.2 ping statistics ---
00:13:55.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:55.638 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:55.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:55.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms
00:13:55.638
00:13:55.638 --- 10.0.0.1 ping statistics ---
00:13:55.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:55.638 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1586923
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1586923
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1586923 ']'
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:55.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:55.638 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:55.897 [2024-12-10 14:15:56.399492] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization...
00:13:55.897 [2024-12-10 14:15:56.399541] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:55.897 [2024-12-10 14:15:56.482413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:13:55.897 [2024-12-10 14:15:56.520137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:55.897 [2024-12-10 14:15:56.520173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:55.897 [2024-12-10 14:15:56.520181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:55.897 [2024-12-10 14:15:56.520187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:55.897 [2024-12-10 14:15:56.520191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:55.897 [2024-12-10 14:15:56.521581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:55.897 [2024-12-10 14:15:56.521669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:55.897 [2024-12-10 14:15:56.521669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:55.897 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:55.897 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0
00:13:55.897 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:13:55.897 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:55.897 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:56.156 [2024-12-10 14:15:56.670133] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
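In the trace above, nvmfappstart launches build/bin/nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, waitforlisten blocks until the RPC socket /var/tmp/spdk.sock comes up, and rpc_cmd (which effectively drives scripts/rpc.py) starts assembling the connect_stress fixture. The same fixture can be reproduced by hand; a minimal bash sketch, assuming a target is already running on the default RPC socket, with $SPDK_DIR as a hypothetical placeholder for the checkout path rather than a variable the harness defines:

# Rebuild the fixture recorded in the trace, step by step, against a running
# nvmf_tgt listening on the default RPC socket (/var/tmp/spdk.sock).
rpc="$SPDK_DIR/scripts/rpc.py"                          # $SPDK_DIR: placeholder, not a harness variable
$rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport, with the options recorded above
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                      # -a allow any host, -s serial number, -m max namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                          # the target-side address verified by the ping check above
$rpc bdev_null_create NULL1 1000 512                    # 1000 MB null bdev with 512-byte blocks

The listener, the NULL1 bdev, and the connect_stress run that hammers the subsystem for 10 seconds (-t 10) follow in the trace below.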
00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.156 [2024-12-10 14:15:56.690357] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.156 NULL1 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1586950 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.156 14:15:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.156 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.157 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.157 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.157 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.157 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.157 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.157 14:15:56 
00:13:56.157 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1586950
00:13:56.157 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:56.157 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.157 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
(this liveness-poll stanza -- [[ 0 == 0 ]], kill -0 1586950, rpc_cmd, xtrace_disable, set +x -- then repeats unchanged roughly every 0.3-0.6 s, about 30 iterations in all, from 00:13:56.415 through 00:14:06.066 while the stress run is in flight)
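
The repeating stanza is a liveness poll: kill -0 probes whether the stress process still exists without sending any signal, and each pass replays the RPC batch so the target keeps absorbing configuration traffic while connections churn. The redirection does not show up in the xtrace, so feeding $rpcs to rpc_cmd is an assumption in this sketch:

    # Keep feeding the target the prepared RPC batch until the
    # connect_stress binary (PERF_PID) exits; kill -0 sends no signal.
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc_cmd < "$rpcs"
    done
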
00:14:06.066 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1586950
00:14:06.066 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:06.066 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.066 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:06.325 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1586950
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1586950) - No such process
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1586950
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1586923 ']'
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1586923
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1586923 ']'
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1586923
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1586923
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1586923'
killing process with pid 1586923
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1586923
00:14:06.325 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1586923
00:14:06.585 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:14:06.585 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:14:06.585 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:14:06.585 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:14:06.585 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:14:06.585 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:14:06.585 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:14:06.585 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:14:06.585 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:14:06.585 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:06.585 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:06.585 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:08.544 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:08.544
00:14:08.544 real 0m19.924s
00:14:08.544 user 0m39.603s
00:14:08.544 sys 0m9.332s
00:14:08.544 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:08.544 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:08.544 ************************************
00:14:08.544 END TEST nvmf_connect_stress
00:14:08.544 ************************************
00:14:08.544 14:16:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:14:08.544 14:16:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:08.544 14:16:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:08.544 14:16:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:08.874 ************************************
00:14:08.874 START TEST nvmf_fused_ordering
00:14:08.874 ************************************
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
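
For reference before the next test re-runs the same init path: the nvmftestfini/nvmfcleanup trace that closed out connect_stress above reduces to the teardown order below. A condensed sketch -- the netns delete inside _remove_spdk_ns is an assumption, everything else is taken straight from the trace:

    sync
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break    # retried: the module can be briefly busy
    done
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"      # the nvmf_tgt reactor, pid 1586923 in this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk         # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
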
00:14:08.874 * Looking for test storage...
00:14:08.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-:
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-:
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<'
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:08.874 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:08.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:08.874 --rc genhtml_branch_coverage=1
00:14:08.874 --rc genhtml_function_coverage=1
00:14:08.874 --rc genhtml_legend=1
00:14:08.874 --rc geninfo_all_blocks=1
00:14:08.874 --rc geninfo_unexecuted_blocks=1
00:14:08.874
00:14:08.874 '
(the identical --rc option block is traced three more times, for the @1724 LCOV_OPTS assignment and the @1725 export LCOV / LCOV='lcov ...' assignments)
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
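
The cmp_versions walk above is a plain component-wise numeric compare: split each version string on ".", "-" or ":", then compare position by position until one side wins; lt 1.15 2 succeeds at the first component because 1 < 2. A standalone sketch of the same idea (not the harness's exact code; numeric components assumed):

    # Return 0 if dotted version $1 sorts strictly before $2.
    version_lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local v
        for (( v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++ )); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x detected"
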
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three /opt entries repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[the @2 value with one more /opt/go entry prepended]
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[the @3 value with one more /opt/protoc entry prepended]
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[the full @4 value echoed back]
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
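
Worth noticing in the traced value: the /opt/golangci, /opt/protoc and /opt/go entries pile up because paths/export.sh blindly prepends them every time it is sourced, once per test script. Harmless, but an idempotent guard of this shape would keep PATH flat (a sketch, not part of the harness):

    # Prepend a directory to PATH only when it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;                 # already on PATH, nothing to do
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/golangci/1.54.2/bin
    export PATH
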
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable
00:14:08.875 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
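
The "integer expression expected" complaint above is a real, if benign, bug: at nvmf/common.sh line 33 the script runs test's -eq against a variable that expanded to the empty string, and -eq demands an integer on both sides. The standard fix is a numeric default; the flag name below is invented for illustration, since the trace only shows the empty expansion:

    # Broken when the flag is unset or empty -- test(1) then sees
    # '[ "" -eq 1 ]' and prints: [: : integer expression expected
    #   [ "$SOME_TEST_FLAG" -eq 1 ] && echo enabled

    # Robust: give the expansion a numeric fallback before -eq.
    [ "${SOME_TEST_FLAG:-0}" -eq 1 ] && echo enabled
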
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=()
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=()
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=()
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=()
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=()
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=()
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=()
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx
(the @325-@344 lines then seed the device-ID tables: e810 gets $intel:0x1592 and $intel:0x159b, x722 gets $intel:0x37d2, and mlx gets the $mellanox IDs 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015 and 0x1013)
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:14:15.480 Found 0000:af:00.0 (0x8086 - 0x159b)
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:14:15.480 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:14:15.481 Found 0000:af:00.1 (0x8086 - 0x159b)
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]]
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:14:15.481 Found net devices under 0000:af:00.0: cvl_0_0
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
(the @410-@429 sysfs lookup then repeats for the second port and reports: Found net devices under 0000:af:00.1: cvl_0_1)
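
The @411 glob is the whole discovery trick: the kernel exposes every net device owned by a PCI function under that function's sysfs node, so no driver-specific tooling is needed to go from PCI address to interface name. The same lookup works standalone:

    # List the net devices backed by one PCI function (port 0 of the E810 here).
    pci=0000:af:00.0
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$dev" ] && echo "Found net device under $pci: ${dev##*/}"   # -> cvl_0_0
    done
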
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:15.481 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
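
The nvmf_tcp_init trace above is the whole single-host test topology: the target-side port (cvl_0_0) moves into its own network namespace with 10.0.0.2 while the initiator-side port (cvl_0_1) keeps 10.0.0.1 in the root namespace, so NVMe/TCP traffic genuinely crosses the wire between the two E810 ports; the iptables rule is inserted in first position and tagged so teardown can strip exactly this rule. Reduced to bare commands, all taken from the trace:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
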
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:14:15.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:15.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms
00:14:15.741
00:14:15.741 --- 10.0.0.2 ping statistics ---
00:14:15.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:15.741 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:15.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:15.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms
00:14:15.741
00:14:15.741 --- 10.0.0.1 ping statistics ---
00:14:15.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:15.741 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1592569
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1592569
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1592569 ']'
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:15.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:15.741 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:15.741 [2024-12-10 14:16:16.352427] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization...
00:14:15.741 [2024-12-10 14:16:16.352475] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:15.741 [2024-12-10 14:16:16.438443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:15.741 [2024-12-10 14:16:16.477559] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:15.741 [2024-12-10 14:16:16.477593] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:15.741 [2024-12-10 14:16:16.477600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:15.741 [2024-12-10 14:16:16.477606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:15.741 [2024-12-10 14:16:16.477611] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:15.741 [2024-12-10 14:16:16.478147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0
00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable
00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
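
waitforlisten above gates everything that follows: rpc_cmd calls are only safe once the freshly launched nvmf_tgt answers on its UNIX-domain RPC socket. A minimal version of that wait, assuming SPDK's stock rpc.py and the default socket path shown in the trace:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for (( i = 0; i < max_retries; i++ )); do
        # rpc_get_methods succeeds as soon as the app services RPCs.
        "$rpc_py" -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
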
-- # [[ 0 == 0 ]] 00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:16.000 [2024-12-10 14:16:16.637844] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:16.000 NULL1 00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.000 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:16.000 [2024-12-10 14:16:16.696654] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:14:16.000 [2024-12-10 14:16:16.696684] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1592589 ]
00:14:16.568 Attached to nqn.2016-06.io.spdk:cnode1
00:14:16.568 Namespace ID: 1 size: 1GB
00:14:16.568 fused_ordering(0)
00:14:16.568 fused_ordering(1)
[... fused_ordering(2) through fused_ordering(1022), one entry per line, 00:14:16.568-00:14:17.917 ...]
00:14:17.917 fused_ordering(1023)
00:14:17.917 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:14:17.917 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:14:17.917 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:14:17.917 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:14:17.917 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:17.917 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:14:17.917 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:17.917 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:17.917 rmmod nvme_tcp
00:14:17.917 rmmod nvme_fabrics
00:14:17.917 rmmod nvme_keyring
00:14:17.917 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:17.917 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:14:17.917 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:14:17.917 14:16:18
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1592569 ']' 00:14:17.917 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1592569 00:14:17.917 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1592569 ']' 00:14:17.917 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1592569 00:14:18.177 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:18.177 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:18.177 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1592569 00:14:18.177 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:18.177 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:18.177 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1592569' 00:14:18.177 killing process with pid 1592569 00:14:18.177 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1592569 00:14:18.177 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1592569 00:14:18.177 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:18.177 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:18.177 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:18.177 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:18.177 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:18.177 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:18.177 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:18.177 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:18.177 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:18.177 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.177 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:18.177 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.713 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:20.713 00:14:20.713 real 0m11.635s 00:14:20.713 user 0m5.424s 00:14:20.713 sys 0m6.478s 00:14:20.713 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:20.713 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.713 ************************************ 00:14:20.713 END TEST nvmf_fused_ordering 00:14:20.713 
************************************ 00:14:20.713 14:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:20.713 14:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:20.713 14:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:20.713 14:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:20.713 ************************************ 00:14:20.713 START TEST nvmf_ns_masking 00:14:20.713 ************************************ 00:14:20.713 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:20.713 * Looking for test storage... 00:14:20.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:20.713 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:20.713 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:14:20.713 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:20.713 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:20.713 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:20.713 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:20.713 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:20.713 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:20.713 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:20.713 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:20.713 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:20.713 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:20.713 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:20.713 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:20.713 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:20.713 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:20.713 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:20.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.714 --rc genhtml_branch_coverage=1 00:14:20.714 --rc genhtml_function_coverage=1 00:14:20.714 --rc genhtml_legend=1 00:14:20.714 --rc geninfo_all_blocks=1 00:14:20.714 --rc geninfo_unexecuted_blocks=1 00:14:20.714 00:14:20.714 ' 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:20.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.714 --rc genhtml_branch_coverage=1 00:14:20.714 --rc genhtml_function_coverage=1 00:14:20.714 --rc genhtml_legend=1 00:14:20.714 --rc geninfo_all_blocks=1 00:14:20.714 --rc geninfo_unexecuted_blocks=1 00:14:20.714 00:14:20.714 ' 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:20.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.714 --rc genhtml_branch_coverage=1 00:14:20.714 --rc genhtml_function_coverage=1 00:14:20.714 --rc genhtml_legend=1 00:14:20.714 --rc geninfo_all_blocks=1 00:14:20.714 --rc geninfo_unexecuted_blocks=1 00:14:20.714 00:14:20.714 ' 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:20.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.714 --rc genhtml_branch_coverage=1 00:14:20.714 --rc genhtml_function_coverage=1 00:14:20.714 --rc genhtml_legend=1 00:14:20.714 --rc geninfo_all_blocks=1 00:14:20.714 --rc geninfo_unexecuted_blocks=1 00:14:20.714 00:14:20.714 ' 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:20.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=b03b7a15-63a2-472d-970d-de75e01bcd69 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=1e887ad4-6ce1-446d-a913-3d41cc0333fd 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=e7d8860c-4018-4172-8974-9cc5f769d66b 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:20.714 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.715 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:20.715 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:20.715 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:20.715 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:27.287 14:16:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:27.287 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:27.287 14:16:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:27.287 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:27.287 Found net devices under 0000:af:00.0: cvl_0_0 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
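The discovery loop traced above matches each PCI function's vendor/device pair against the supported NIC tables (0x8086:0x159b is the Intel E810 "ice" family) and then resolves every matched function to its kernel net interface through sysfs. A minimal standalone sketch of that resolution, assuming only bash and sysfs; this is an illustration, not the gather_supported_nvmf_pci_devs implementation from spdk/test/nvmf/common.sh:

# Sketch: list net interfaces backed by Intel E810 functions (0x8086:0x159b).
# Illustrative only; the real helper also handles x722 and Mellanox device
# IDs and driver-state checks, as the trace above shows.
for pci in /sys/bus/pci/devices/*; do
  vendor=$(<"$pci/vendor")   # e.g. 0x8086
  device=$(<"$pci/device")   # e.g. 0x159b
  [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
  for net in "$pci"/net/*; do
    [[ -e $net ]] && echo "Found ${pci##*/} ($vendor - $device): ${net##*/}"
  done
done

Running this on the test node would print the same pairs the log reports, e.g. "Found 0000:af:00.0 (0x8086 - 0x159b): cvl_0_0".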
00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:27.287 Found net devices under 0000:af:00.1: cvl_0_1 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:27.287 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:27.288 14:16:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:14:27.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:27.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms
00:14:27.288
00:14:27.288 --- 10.0.0.2 ping statistics ---
00:14:27.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:27.288 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms
00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:27.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:27.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms
00:14:27.288
00:14:27.288 --- 10.0.0.1 ping statistics ---
00:14:27.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:27.288 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms
00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0
00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:14:27.288 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:14:27.288 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:14:27.288 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:14:27.288 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable
00:14:27.288 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:14:27.288 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1596856
00:14:27.288 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1596856
00:14:27.288 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:14:27.288 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1596856 ']'
00:14:27.288 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking --
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.288 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.288 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.288 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.288 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:27.547 [2024-12-10 14:16:28.066482] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:14:27.547 [2024-12-10 14:16:28.066530] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.547 [2024-12-10 14:16:28.154645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.547 [2024-12-10 14:16:28.192615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.547 [2024-12-10 14:16:28.192649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.547 [2024-12-10 14:16:28.192656] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.547 [2024-12-10 14:16:28.192662] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.547 [2024-12-10 14:16:28.192667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
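At this point nvmftestinit has pinned down the data path: the target-side E810 port cvl_0_0 lives in the cvl_0_0_ns_spdk network namespace with 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24, TCP port 4420 is opened in iptables, both directions answer ping, and nvmf_tgt is launched inside the namespace. Without E810 hardware the same topology can be approximated with a veth pair; a hedged sketch, where the namespace and interface names (nvmf_tgt_ns, tgt0, ini0) are illustrative rather than the test's:

# Sketch: veth-based stand-in for the two-port setup traced above.
sudo ip netns add nvmf_tgt_ns                      # target-side namespace
sudo ip link add ini0 type veth peer name tgt0     # initiator/target pair
sudo ip link set tgt0 netns nvmf_tgt_ns
sudo ip addr add 10.0.0.1/24 dev ini0
sudo ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev tgt0
sudo ip link set ini0 up
sudo ip netns exec nvmf_tgt_ns ip link set tgt0 up
sudo ip netns exec nvmf_tgt_ns ip link set lo up
# mirror the ipts ACCEPT rule from the trace for the NVMe/TCP port
sudo iptables -I INPUT 1 -i ini0 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
# then, as nvmfappstart does, run the target inside the namespace:
# sudo ip netns exec nvmf_tgt_ns ./build/bin/nvmf_tgt -i 0 -e 0xFFFF

Running the target in its own namespace is what lets a single node act as both NVMe/TCP host and target without the kernel short-circuiting the loopback path.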
00:14:27.547 [2024-12-10 14:16:28.193158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.806 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.806 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:27.806 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:27.806 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:27.806 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:27.806 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.806 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:27.806 [2024-12-10 14:16:28.497854] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.806 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:27.806 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:27.806 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:28.065 Malloc1 00:14:28.065 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:28.323 Malloc2 00:14:28.323 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:28.581 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:28.840 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:28.840 [2024-12-10 14:16:29.506947] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.840 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:28.840 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e7d8860c-4018-4172-8974-9cc5f769d66b -a 10.0.0.2 -s 4420 -i 4 00:14:29.099 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:29.099 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:29.099 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:29.099 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:29.099 
14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:31.006 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:31.006 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:31.006 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:31.006 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:31.006 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:31.006 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:31.006 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:31.006 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:31.266 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:31.266 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:31.266 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:31.266 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:31.266 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:31.266 [ 0]:0x1 00:14:31.266 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:31.266 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:31.266 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e48d40cacdd449ac81fbc31e86b355b6 00:14:31.266 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e48d40cacdd449ac81fbc31e86b355b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.266 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:31.525 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:31.525 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:31.525 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:31.525 [ 0]:0x1 00:14:31.525 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:31.525 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:31.525 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e48d40cacdd449ac81fbc31e86b355b6 00:14:31.525 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e48d40cacdd449ac81fbc31e86b355b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.525 14:16:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:31.525 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:31.525 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:31.525 [ 1]:0x2 00:14:31.525 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:31.525 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:31.525 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dee20a607be14c669c0a9fd8e47c64a5 00:14:31.525 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dee20a607be14c669c0a9fd8e47c64a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.525 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:31.526 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:31.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.784 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.043 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:32.043 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:32.043 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e7d8860c-4018-4172-8974-9cc5f769d66b -a 10.0.0.2 -s 4420 -i 4 00:14:32.301 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:32.301 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:32.301 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:32.301 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:32.301 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:32.301 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:34.835 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:34.835 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:34.835 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:34.835 [ 0]:0x2 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=dee20a607be14c669c0a9fd8e47c64a5 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dee20a607be14c669c0a9fd8e47c64a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:34.835 [ 0]:0x1 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e48d40cacdd449ac81fbc31e86b355b6 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e48d40cacdd449ac81fbc31e86b355b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:34.835 [ 1]:0x2 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dee20a607be14c669c0a9fd8e47c64a5 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dee20a607be14c669c0a9fd8e47c64a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:34.835 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:35.094 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:35.094 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:35.094 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:35.094 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:35.094 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.094 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:35.094 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.094 14:16:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:35.094 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:35.094 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:35.094 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:35.094 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.094 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:35.094 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.094 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:35.094 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:35.094 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:35.094 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:35.094 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:35.094 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:35.094 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:35.094 [ 0]:0x2 00:14:35.094 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:35.094 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.353 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dee20a607be14c669c0a9fd8e47c64a5 00:14:35.353 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dee20a607be14c669c0a9fd8e47c64a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.353 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:35.353 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:35.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.353 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:35.612 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:35.612 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e7d8860c-4018-4172-8974-9cc5f769d66b -a 10.0.0.2 -s 4420 -i 4 00:14:35.612 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:35.612 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:35.612 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:35.612 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:35.612 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:35.612 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:38.146 [ 0]:0x1 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e48d40cacdd449ac81fbc31e86b355b6 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e48d40cacdd449ac81fbc31e86b355b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:38.146 [ 1]:0x2 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dee20a607be14c669c0a9fd8e47c64a5 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dee20a607be14c669c0a9fd8e47c64a5 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:38.146 [ 0]:0x2 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dee20a607be14c669c0a9fd8e47c64a5 00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dee20a607be14c669c0a9fd8e47c64a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.146 14:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:14:38.146 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:14:38.405 [2024-12-10 14:16:39.054295] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:14:38.405 request:
00:14:38.405 {
00:14:38.405 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:14:38.405 "nsid": 2,
00:14:38.405 "host": "nqn.2016-06.io.spdk:host1",
00:14:38.405 "method": "nvmf_ns_remove_host",
00:14:38.405 "req_id": 1
00:14:38.405 }
00:14:38.405 Got JSON-RPC error response
00:14:38.405 response:
00:14:38.405 {
00:14:38.405 "code": -32602,
00:14:38.405 "message": "Invalid parameters"
00:14:38.405 }
00:14:38.405 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:14:38.405 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:38.405 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:38.405 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:38.405 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:14:38.405 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:14:38.405 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:14:38.405 14:16:39
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:38.405 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:38.405 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:38.405 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:38.405 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:38.405 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:38.406 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:38.406 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:38.406 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:38.406 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:38.406 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.406 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:38.406 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:38.406 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:38.406 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:38.406 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:38.664 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:38.664 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:38.664 [ 0]:0x2 00:14:38.664 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:38.664 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:38.664 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dee20a607be14c669c0a9fd8e47c64a5 00:14:38.664 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dee20a607be14c669c0a9fd8e47c64a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.664 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:38.664 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:38.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.664 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1598905 00:14:38.664 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:38.664 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.664 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1598905 /var/tmp/host.sock 00:14:38.664 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1598905 ']' 00:14:38.664 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:38.664 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.664 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:38.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:38.664 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.664 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:38.664 [2024-12-10 14:16:39.387903] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:14:38.665 [2024-12-10 14:16:39.387950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1598905 ] 00:14:38.924 [2024-12-10 14:16:39.469387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.924 [2024-12-10 14:16:39.508573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.182 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:39.183 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:39.183 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.183 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:39.442 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid b03b7a15-63a2-472d-970d-de75e01bcd69 00:14:39.442 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:39.442 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g B03B7A1563A2472D970DDE75E01BCD69 -i 00:14:39.700 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 1e887ad4-6ce1-446d-a913-3d41cc0333fd 00:14:39.700 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:39.700 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 1E887AD46CE1446DA9133D41CC0333FD -i 00:14:39.959 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:40.218 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:40.218 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:40.218 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:40.477 nvme0n1 00:14:40.477 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:40.477 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:41.045 nvme1n2 00:14:41.045 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:41.045 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:41.045 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:41.045 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:41.045 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:41.045 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:41.045 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:41.045 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:41.045 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:41.304 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ b03b7a15-63a2-472d-970d-de75e01bcd69 == \b\0\3\b\7\a\1\5\-\6\3\a\2\-\4\7\2\d\-\9\7\0\d\-\d\e\7\5\e\0\1\b\c\d\6\9 ]] 00:14:41.304 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:41.304 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:41.304 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:41.562 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
1e887ad4-6ce1-446d-a913-3d41cc0333fd == \1\e\8\8\7\a\d\4\-\6\c\e\1\-\4\4\6\d\-\a\9\1\3\-\3\d\4\1\c\c\0\3\3\3\f\d ]] 00:14:41.562 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.821 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:41.821 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid b03b7a15-63a2-472d-970d-de75e01bcd69 00:14:41.821 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:41.821 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g B03B7A1563A2472D970DDE75E01BCD69 00:14:41.821 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:41.821 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g B03B7A1563A2472D970DDE75E01BCD69 00:14:41.821 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:41.821 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:41.821 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:41.821 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:41.821 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:41.821 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:41.821 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:41.821 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:41.821 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g B03B7A1563A2472D970DDE75E01BCD69 00:14:42.080 [2024-12-10 14:16:42.672235] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:42.080 [2024-12-10 14:16:42.672267] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:42.080 [2024-12-10 14:16:42.672275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.080 request: 00:14:42.080 { 00:14:42.080 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.080 "namespace": { 00:14:42.080 "bdev_name": 
"invalid", 00:14:42.080 "nsid": 1, 00:14:42.080 "nguid": "B03B7A1563A2472D970DDE75E01BCD69", 00:14:42.080 "no_auto_visible": false, 00:14:42.080 "hide_metadata": false 00:14:42.080 }, 00:14:42.080 "method": "nvmf_subsystem_add_ns", 00:14:42.080 "req_id": 1 00:14:42.080 } 00:14:42.080 Got JSON-RPC error response 00:14:42.080 response: 00:14:42.080 { 00:14:42.080 "code": -32602, 00:14:42.080 "message": "Invalid parameters" 00:14:42.080 } 00:14:42.080 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:42.080 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:42.080 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:42.080 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:42.080 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid b03b7a15-63a2-472d-970d-de75e01bcd69 00:14:42.080 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:42.080 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g B03B7A1563A2472D970DDE75E01BCD69 -i 00:14:42.339 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:44.243 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:44.243 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:44.243 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:44.502 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:44.502 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1598905 00:14:44.502 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1598905 ']' 00:14:44.502 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1598905 00:14:44.502 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:44.502 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:44.502 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1598905 00:14:44.502 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:44.502 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:44.502 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1598905' 00:14:44.502 killing process with pid 1598905 00:14:44.502 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1598905 00:14:44.502 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1598905 00:14:44.765 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.023 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:45.023 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:45.023 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:45.023 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:45.023 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:45.023 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:45.023 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:45.023 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:45.023 rmmod nvme_tcp 00:14:45.023 rmmod nvme_fabrics 00:14:45.023 rmmod nvme_keyring 00:14:45.023 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:45.023 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:45.023 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:45.023 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1596856 ']' 00:14:45.023 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1596856 00:14:45.023 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1596856 ']' 00:14:45.023 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1596856 00:14:45.024 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:45.024 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:45.024 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1596856 00:14:45.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:45.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:45.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1596856' 00:14:45.283 killing process with pid 1596856 00:14:45.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1596856 00:14:45.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1596856 00:14:45.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:45.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:45.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:45.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:45.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:45.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:14:45.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:45.283 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:45.283 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:45.283 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.283 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.283 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:47.819 00:14:47.819 real 0m27.055s 00:14:47.819 user 0m31.429s 00:14:47.819 sys 0m7.736s 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:47.819 ************************************ 00:14:47.819 END TEST nvmf_ns_masking 00:14:47.819 ************************************ 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:47.819 ************************************ 00:14:47.819 START TEST nvmf_nvme_cli 00:14:47.819 ************************************ 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:47.819 * Looking for test storage... 
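The ns_masking run that ends above drives SPDK's namespace-visibility controls entirely through rpc.py. A condensed sketch of that sequence follows, using the subsystem NQN, host NQNs, NGUIDs, and bdev names exactly as they appear in the trace; the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path is abbreviated to rpc.py here for readability.

# Detach both namespaces from cnode1, then re-add them hidden (-i) with
# explicit NGUIDs so visibility can be granted per host.
rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g B03B7A1563A2472D970DDE75E01BCD69 -i
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 1E887AD46CE1446DA9133D41CC0333FD -i
# Grant each host exactly one namespace; the [[ uuid == ... ]] checks in the
# trace then confirm host1 sees only nvme0n1 and host2 sees only nvme1n2.
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2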
00:14:47.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:47.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.819 --rc genhtml_branch_coverage=1 00:14:47.819 --rc genhtml_function_coverage=1 00:14:47.819 --rc genhtml_legend=1 00:14:47.819 --rc geninfo_all_blocks=1 00:14:47.819 --rc geninfo_unexecuted_blocks=1 00:14:47.819 00:14:47.819 ' 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:47.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.819 --rc genhtml_branch_coverage=1 00:14:47.819 --rc genhtml_function_coverage=1 00:14:47.819 --rc genhtml_legend=1 00:14:47.819 --rc geninfo_all_blocks=1 00:14:47.819 --rc geninfo_unexecuted_blocks=1 00:14:47.819 00:14:47.819 ' 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:47.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.819 --rc genhtml_branch_coverage=1 00:14:47.819 --rc genhtml_function_coverage=1 00:14:47.819 --rc genhtml_legend=1 00:14:47.819 --rc geninfo_all_blocks=1 00:14:47.819 --rc geninfo_unexecuted_blocks=1 00:14:47.819 00:14:47.819 ' 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:47.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.819 --rc genhtml_branch_coverage=1 00:14:47.819 --rc genhtml_function_coverage=1 00:14:47.819 --rc genhtml_legend=1 00:14:47.819 --rc geninfo_all_blocks=1 00:14:47.819 --rc geninfo_unexecuted_blocks=1 00:14:47.819 00:14:47.819 ' 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.819 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:47.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:47.820 14:16:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:47.820 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:54.391 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:54.391 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.391 
14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:54.391 Found net devices under 0000:af:00.0: cvl_0_0 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:54.391 Found net devices under 0000:af:00.1: cvl_0_1 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:54.391 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:54.392 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:54.392 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:54.392 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:54.392 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:54.392 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:54.392 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:54.392 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:54.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:54.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:14:54.392 00:14:54.392 --- 10.0.0.2 ping statistics --- 00:14:54.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.392 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:54.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:54.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:14:54.392 00:14:54.392 --- 10.0.0.1 ping statistics --- 00:14:54.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.392 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1603976 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1603976 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1603976 ']' 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.392 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:54.650 [2024-12-10 14:16:55.163620] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:14:54.650 [2024-12-10 14:16:55.163664] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.650 [2024-12-10 14:16:55.246011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:54.650 [2024-12-10 14:16:55.285582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.650 [2024-12-10 14:16:55.285620] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.650 [2024-12-10 14:16:55.285626] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.650 [2024-12-10 14:16:55.285632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.650 [2024-12-10 14:16:55.285636] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.650 [2024-12-10 14:16:55.287189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.650 [2024-12-10 14:16:55.287318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.650 [2024-12-10 14:16:55.287350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.650 [2024-12-10 14:16:55.287352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:55.586 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:55.586 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:55.586 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:55.586 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:55.586 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.586 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.587 [2024-12-10 14:16:56.042897] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.587 Malloc0 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
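Before the cnode1 subsystem above is populated, the trace shows the target being isolated in its own network namespace and started there. A minimal sketch of that bring-up, assembled only from the ip, iptables, nvmf_tgt, and rpc invocations recorded in the log (binary and script paths shortened, and the iptables comment flag omitted; the cvl_0_0/cvl_0_1 interface names come from the e810 ports discovered earlier):

# Split the NIC pair: one port stays in the root namespace as the
# initiator side, the other moves into a fresh namespace for the target.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP traffic on the initiator port, then launch the target
# inside the namespace and create the TCP transport over its RPC socket.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
rpc.py nvmf_create_transport -t tcp -o -u 8192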
00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.587 Malloc1 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.587 [2024-12-10 14:16:56.141442] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.587 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:55.845 00:14:55.845 Discovery Log Number of Records 2, Generation counter 2 00:14:55.845 =====Discovery Log Entry 0====== 00:14:55.845 trtype: tcp 00:14:55.845 adrfam: ipv4 00:14:55.845 subtype: current discovery subsystem 00:14:55.845 treq: not required 00:14:55.845 portid: 0 00:14:55.845 trsvcid: 4420 00:14:55.845 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:55.845 traddr: 10.0.0.2 00:14:55.845 eflags: explicit discovery connections, duplicate discovery information 00:14:55.845 sectype: none 00:14:55.845 =====Discovery Log Entry 1====== 00:14:55.845 trtype: tcp 00:14:55.845 adrfam: ipv4 00:14:55.845 subtype: nvme subsystem 00:14:55.845 treq: not required 00:14:55.845 portid: 0 00:14:55.845 trsvcid: 4420 00:14:55.845 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:55.845 traddr: 10.0.0.2 00:14:55.845 eflags: none 00:14:55.845 sectype: none 00:14:55.845 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:55.845 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:55.846 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:55.846 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:55.846 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:55.846 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:55.846 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:55.846 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:55.846 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:55.846 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:14:55.846 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n1 00:14:55.846 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:55.846 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:14:55.846 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n2 00:14:55.846 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:55.846 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=2 00:14:55.846 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:56.781 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:56.781 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:56.781 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:56.781 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:56.781 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:56.781 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n1 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n2 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:59.313 /dev/nvme0n2 00:14:59.313 /dev/nvme1n1 00:14:59.313 /dev/nvme1n2 ]] 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:59.313 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == 
/dev/nvme* ]] 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n1 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n2 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=4 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:59.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.314 14:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:59.314 rmmod nvme_tcp 00:14:59.314 rmmod nvme_fabrics 00:14:59.314 rmmod nvme_keyring 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1603976 ']' 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1603976 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1603976 ']' 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1603976 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1603976 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1603976' 00:14:59.314 killing process with pid 1603976 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1603976 00:14:59.314 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1603976 00:14:59.314 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:59.314 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:59.314 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:59.314 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:59.314 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:59.314 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:59.314 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:59.314 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:59.314 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:59.573 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.573 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:59.573 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.477 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:01.477 00:15:01.477 real 0m13.970s 00:15:01.477 user 0m20.988s 00:15:01.477 sys 0m5.747s 00:15:01.477 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.477 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:01.477 ************************************ 00:15:01.477 END TEST nvmf_nvme_cli 00:15:01.477 ************************************ 00:15:01.477 14:17:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:01.477 14:17:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:01.477 14:17:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:01.477 14:17:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.477 14:17:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:01.477 ************************************ 00:15:01.477 START TEST nvmf_vfio_user 00:15:01.477 ************************************ 00:15:01.477 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:01.737 * Looking for test storage... 
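The teardown traced above keys everything off the subsystem serial number: waitforserial_disconnect polls `lsblk -l -o NAME,SERIAL` until no block device reports SPDKISFASTANDAWESOME anymore, confirming that `nvme disconnect -n nqn.2016-06.io.spdk:cnode1` really removed the namespaces before the subsystem is deleted and the nvme-tcp/nvme-fabrics modules are unloaded. A minimal sketch of that wait pattern, assuming a hypothetical helper name and a 15-second budget (the real helper lives in common/autotest_common.sh and differs in detail):

    # Sketch only: poll until no block device reports the given serial.
    # Function name and timeout are illustrative, not SPDK's exact helper.
    wait_serial_gone() {
        local serial=$1 i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( ++i > 15 )) && return 1   # give up after ~15 s
            sleep 1
        done
    }

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    wait_serial_gone SPDKISFASTANDAWESOME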
00:15:01.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:01.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.737 --rc genhtml_branch_coverage=1 00:15:01.737 --rc genhtml_function_coverage=1 00:15:01.737 --rc genhtml_legend=1 00:15:01.737 --rc geninfo_all_blocks=1 00:15:01.737 --rc geninfo_unexecuted_blocks=1 00:15:01.737 00:15:01.737 ' 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:01.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.737 --rc genhtml_branch_coverage=1 00:15:01.737 --rc genhtml_function_coverage=1 00:15:01.737 --rc genhtml_legend=1 00:15:01.737 --rc geninfo_all_blocks=1 00:15:01.737 --rc geninfo_unexecuted_blocks=1 00:15:01.737 00:15:01.737 ' 00:15:01.737 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:01.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.737 --rc genhtml_branch_coverage=1 00:15:01.738 --rc genhtml_function_coverage=1 00:15:01.738 --rc genhtml_legend=1 00:15:01.738 --rc geninfo_all_blocks=1 00:15:01.738 --rc geninfo_unexecuted_blocks=1 00:15:01.738 00:15:01.738 ' 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:01.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.738 --rc genhtml_branch_coverage=1 00:15:01.738 --rc genhtml_function_coverage=1 00:15:01.738 --rc genhtml_legend=1 00:15:01.738 --rc geninfo_all_blocks=1 00:15:01.738 --rc geninfo_unexecuted_blocks=1 00:15:01.738 00:15:01.738 ' 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:01.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
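The `[: : integer expression expected` complaint above comes from nvmf/common.sh line 33 evaluating `'[' '' -eq 1 ']'`: an unset test flag expands to the empty string, which `[` cannot compare numerically. It is harmless here, but a defaulted expansion keeps the test well-formed; a tiny illustration (the variable name is hypothetical):

    flag=''                                      # empty, like the '' -eq 1 in the trace
    [ "$flag" -eq 1 ]                            # -> [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] || echo 'flag off'    # default to 0, test stays numeric

Note also how each source of paths/export.sh prepends the same /opt/go, /opt/protoc and /opt/golangci directories again, which is why PATH carries seven copies of each by this point; harmless, but it makes the trace noisy.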
00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1605257 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1605257' 00:15:01.738 Process pid: 1605257 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1605257 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1605257 ']' 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.738 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:01.738 [2024-12-10 14:17:02.444995] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:15:01.738 [2024-12-10 14:17:02.445043] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.997 [2024-12-10 14:17:02.523801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:01.997 [2024-12-10 14:17:02.561989] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.997 [2024-12-10 14:17:02.562025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
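Before any RPC is issued, waitforlisten blocks until the freshly forked nvmf_tgt (pid 1605257, launched with `-m '[0,1,2,3]'` for four reactors) answers on its UNIX-domain RPC socket. A condensed sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock and using the standard `rpc_get_methods` RPC as the liveness probe (the real helper also enforces a retry limit and more cleanup):

    nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died during startup
        sleep 0.5
    done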
00:15:01.997 [2024-12-10 14:17:02.562032] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.997 [2024-12-10 14:17:02.562037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.997 [2024-12-10 14:17:02.562042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:01.997 [2024-12-10 14:17:02.563563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.997 [2024-12-10 14:17:02.563669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.997 [2024-12-10 14:17:02.563755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:01.997 [2024-12-10 14:17:02.563754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.997 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:01.997 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:01.997 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:02.932 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:03.191 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:03.191 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:03.191 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:03.191 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:03.191 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:03.449 Malloc1 00:15:03.449 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:03.708 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:04.033 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:04.033 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:04.033 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:04.033 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:04.323 Malloc2 00:15:04.323 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
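Once the target is up, setup_nvmf_vfio_user creates the VFIOUSER transport and then provisions one malloc-backed subsystem per device; the second iteration of that loop continues just below. Condensed from the rpc.py calls visible in the trace (workspace path shortened to `rpc`):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    for i in 1 2; do                                        # NUM_DEVICES=2
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i          # 64 MiB bdev, 512 B blocks
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done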
00:15:04.588 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:04.588 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:04.848 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:04.848 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:04.848 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:04.848 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:04.848 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:04.848 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:04.848 [2024-12-10 14:17:05.483936] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:15:04.848 [2024-12-10 14:17:05.483963] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605737 ] 00:15:04.848 [2024-12-10 14:17:05.521728] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:04.848 [2024-12-10 14:17:05.527029] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:04.848 [2024-12-10 14:17:05.527050] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8965168000 00:15:04.848 [2024-12-10 14:17:05.528022] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:04.848 [2024-12-10 14:17:05.529022] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:04.848 [2024-12-10 14:17:05.530036] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:04.848 [2024-12-10 14:17:05.531036] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:04.848 [2024-12-10 14:17:05.532040] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:04.848 [2024-12-10 14:17:05.533046] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:04.848 [2024-12-10 14:17:05.534050] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:15:04.849 [2024-12-10 14:17:05.535057] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:04.849 [2024-12-10 14:17:05.536066] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:04.849 [2024-12-10 14:17:05.536074] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f896515d000 00:15:04.849 [2024-12-10 14:17:05.536988] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:04.849 [2024-12-10 14:17:05.545597] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:04.849 [2024-12-10 14:17:05.545627] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:04.849 [2024-12-10 14:17:05.553177] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:04.849 [2024-12-10 14:17:05.553215] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:04.849 [2024-12-10 14:17:05.553288] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:04.849 [2024-12-10 14:17:05.553307] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:04.849 [2024-12-10 14:17:05.553312] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:04.849 [2024-12-10 14:17:05.554181] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:04.849 [2024-12-10 14:17:05.554191] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:04.849 [2024-12-10 14:17:05.554197] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:04.849 [2024-12-10 14:17:05.555185] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:04.849 [2024-12-10 14:17:05.555193] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:04.849 [2024-12-10 14:17:05.555199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:04.849 [2024-12-10 14:17:05.556187] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:04.849 [2024-12-10 14:17:05.556194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:04.849 [2024-12-10 14:17:05.557191] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
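The DEBUG lines above and below are the standard NVMe controller-enable handshake, carried over vfio-user instead of PCIe: read VS (register offset 0x8) and CAP (0x0), confirm CC.EN = 0 and CSTS.RDY = 0 (offsets 0x14 and 0x1c), program ASQ/ACQ (0x28/0x30) and AQA (0x24), write CC.EN = 1, then poll CSTS until RDY = 1. The whole exchange is driven by the one command copied from the trace (workspace path shortened); `-g` matches the `--single-file-segments` EAL flag above, and the `-L` options enable the per-component debug output being shown:

    build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci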
00:15:04.849 [2024-12-10 14:17:05.557198] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:04.849 [2024-12-10 14:17:05.557203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:04.849 [2024-12-10 14:17:05.557209] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:04.849 [2024-12-10 14:17:05.557316] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:04.849 [2024-12-10 14:17:05.557320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:04.849 [2024-12-10 14:17:05.557325] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:04.849 [2024-12-10 14:17:05.558199] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:04.849 [2024-12-10 14:17:05.559207] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:04.849 [2024-12-10 14:17:05.560215] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:04.849 [2024-12-10 14:17:05.561220] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:04.849 [2024-12-10 14:17:05.561280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:04.849 [2024-12-10 14:17:05.562231] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:04.849 [2024-12-10 14:17:05.562239] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:04.849 [2024-12-10 14:17:05.562246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:04.849 [2024-12-10 14:17:05.562262] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:04.849 [2024-12-10 14:17:05.562273] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:04.849 [2024-12-10 14:17:05.562292] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:04.849 [2024-12-10 14:17:05.562297] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:04.849 [2024-12-10 14:17:05.562301] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:04.849 [2024-12-10 14:17:05.562314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:15:04.849 [2024-12-10 14:17:05.562348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:04.849 [2024-12-10 14:17:05.562358] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:04.849 [2024-12-10 14:17:05.562364] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:04.849 [2024-12-10 14:17:05.562368] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:04.849 [2024-12-10 14:17:05.562372] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:04.849 [2024-12-10 14:17:05.562377] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:04.849 [2024-12-10 14:17:05.562381] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:04.849 [2024-12-10 14:17:05.562385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:04.849 [2024-12-10 14:17:05.562391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:04.849 [2024-12-10 14:17:05.562401] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:04.849 [2024-12-10 14:17:05.562418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:04.849 [2024-12-10 14:17:05.562427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.849 [2024-12-10 14:17:05.562435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.849 [2024-12-10 14:17:05.562443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.849 [2024-12-10 14:17:05.562450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.849 [2024-12-10 14:17:05.562454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:04.849 [2024-12-10 14:17:05.562462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:04.849 [2024-12-10 14:17:05.562470] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:04.849 [2024-12-10 14:17:05.562479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:04.849 [2024-12-10 14:17:05.562485] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:04.849 
[2024-12-10 14:17:05.562489] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:04.849 [2024-12-10 14:17:05.562495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:04.849 [2024-12-10 14:17:05.562501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:04.849 [2024-12-10 14:17:05.562509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:04.849 [2024-12-10 14:17:05.562515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:04.849 [2024-12-10 14:17:05.562565] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:04.849 [2024-12-10 14:17:05.562572] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:04.849 [2024-12-10 14:17:05.562579] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:04.849 [2024-12-10 14:17:05.562583] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:04.849 [2024-12-10 14:17:05.562586] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:04.849 [2024-12-10 14:17:05.562591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:04.849 [2024-12-10 14:17:05.562604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:04.849 [2024-12-10 14:17:05.562612] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:04.849 [2024-12-10 14:17:05.562621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:04.849 [2024-12-10 14:17:05.562628] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:04.849 [2024-12-10 14:17:05.562634] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:04.849 [2024-12-10 14:17:05.562638] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:04.849 [2024-12-10 14:17:05.562641] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:04.849 [2024-12-10 14:17:05.562646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:04.849 [2024-12-10 14:17:05.562665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:04.850 [2024-12-10 14:17:05.562678] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:15:04.850 [2024-12-10 14:17:05.562685] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:04.850 [2024-12-10 14:17:05.562691] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:04.850 [2024-12-10 14:17:05.562695] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:04.850 [2024-12-10 14:17:05.562699] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:04.850 [2024-12-10 14:17:05.562705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:04.850 [2024-12-10 14:17:05.562715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:04.850 [2024-12-10 14:17:05.562722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:04.850 [2024-12-10 14:17:05.562728] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:04.850 [2024-12-10 14:17:05.562734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:04.850 [2024-12-10 14:17:05.562741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:04.850 [2024-12-10 14:17:05.562746] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:04.850 [2024-12-10 14:17:05.562751] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:04.850 [2024-12-10 14:17:05.562755] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:04.850 [2024-12-10 14:17:05.562759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:04.850 [2024-12-10 14:17:05.562764] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:04.850 [2024-12-10 14:17:05.562779] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:04.850 [2024-12-10 14:17:05.562788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:04.850 [2024-12-10 14:17:05.562798] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:04.850 [2024-12-10 14:17:05.562806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:04.850 [2024-12-10 14:17:05.562816] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:04.850 [2024-12-10 14:17:05.562824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:04.850 [2024-12-10 14:17:05.562834] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:04.850 [2024-12-10 14:17:05.562846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:04.850 [2024-12-10 14:17:05.562859] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:04.850 [2024-12-10 14:17:05.562863] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:04.850 [2024-12-10 14:17:05.562866] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:04.850 [2024-12-10 14:17:05.562869] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:04.850 [2024-12-10 14:17:05.562872] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:04.850 [2024-12-10 14:17:05.562877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:04.850 [2024-12-10 14:17:05.562885] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:04.850 [2024-12-10 14:17:05.562889] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:04.850 [2024-12-10 14:17:05.562892] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:04.850 [2024-12-10 14:17:05.562897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:04.850 [2024-12-10 14:17:05.562903] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:04.850 [2024-12-10 14:17:05.562907] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:04.850 [2024-12-10 14:17:05.562910] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:04.850 [2024-12-10 14:17:05.562915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:04.850 [2024-12-10 14:17:05.562922] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:04.850 [2024-12-10 14:17:05.562926] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:04.850 [2024-12-10 14:17:05.562929] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:04.850 [2024-12-10 14:17:05.562934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:04.850 [2024-12-10 14:17:05.562939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:04.850 [2024-12-10 14:17:05.562949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:15:04.850 [2024-12-10 14:17:05.562958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:04.850 [2024-12-10 14:17:05.562964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:04.850 ===================================================== 00:15:04.850 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:04.850 ===================================================== 00:15:04.850 Controller Capabilities/Features 00:15:04.850 ================================ 00:15:04.850 Vendor ID: 4e58 00:15:04.850 Subsystem Vendor ID: 4e58 00:15:04.850 Serial Number: SPDK1 00:15:04.850 Model Number: SPDK bdev Controller 00:15:04.850 Firmware Version: 25.01 00:15:04.850 Recommended Arb Burst: 6 00:15:04.850 IEEE OUI Identifier: 8d 6b 50 00:15:04.850 Multi-path I/O 00:15:04.850 May have multiple subsystem ports: Yes 00:15:04.850 May have multiple controllers: Yes 00:15:04.850 Associated with SR-IOV VF: No 00:15:04.850 Max Data Transfer Size: 131072 00:15:04.850 Max Number of Namespaces: 32 00:15:04.850 Max Number of I/O Queues: 127 00:15:04.850 NVMe Specification Version (VS): 1.3 00:15:04.850 NVMe Specification Version (Identify): 1.3 00:15:04.850 Maximum Queue Entries: 256 00:15:04.850 Contiguous Queues Required: Yes 00:15:04.850 Arbitration Mechanisms Supported 00:15:04.850 Weighted Round Robin: Not Supported 00:15:04.850 Vendor Specific: Not Supported 00:15:04.850 Reset Timeout: 15000 ms 00:15:04.850 Doorbell Stride: 4 bytes 00:15:04.850 NVM Subsystem Reset: Not Supported 00:15:04.850 Command Sets Supported 00:15:04.850 NVM Command Set: Supported 00:15:04.850 Boot Partition: Not Supported 00:15:04.850 Memory Page Size Minimum: 4096 bytes 00:15:04.850 Memory Page Size Maximum: 4096 bytes 00:15:04.850 Persistent Memory Region: Not Supported 00:15:04.850 Optional Asynchronous Events Supported 00:15:04.850 Namespace Attribute Notices: Supported 00:15:04.850 Firmware Activation Notices: Not Supported 00:15:04.850 ANA Change Notices: Not Supported 00:15:04.850 PLE Aggregate Log Change Notices: Not Supported 00:15:04.850 LBA Status Info Alert Notices: Not Supported 00:15:04.850 EGE Aggregate Log Change Notices: Not Supported 00:15:04.850 Normal NVM Subsystem Shutdown event: Not Supported 00:15:04.850 Zone Descriptor Change Notices: Not Supported 00:15:04.850 Discovery Log Change Notices: Not Supported 00:15:04.850 Controller Attributes 00:15:04.850 128-bit Host Identifier: Supported 00:15:04.850 Non-Operational Permissive Mode: Not Supported 00:15:04.850 NVM Sets: Not Supported 00:15:04.850 Read Recovery Levels: Not Supported 00:15:04.850 Endurance Groups: Not Supported 00:15:04.850 Predictable Latency Mode: Not Supported 00:15:04.850 Traffic Based Keep ALive: Not Supported 00:15:04.850 Namespace Granularity: Not Supported 00:15:04.850 SQ Associations: Not Supported 00:15:04.850 UUID List: Not Supported 00:15:04.850 Multi-Domain Subsystem: Not Supported 00:15:04.850 Fixed Capacity Management: Not Supported 00:15:04.850 Variable Capacity Management: Not Supported 00:15:04.850 Delete Endurance Group: Not Supported 00:15:04.850 Delete NVM Set: Not Supported 00:15:04.850 Extended LBA Formats Supported: Not Supported 00:15:04.850 Flexible Data Placement Supported: Not Supported 00:15:04.850 00:15:04.850 Controller Memory Buffer Support 00:15:04.850 ================================ 00:15:04.850 
Supported: No 00:15:04.850 00:15:04.850 Persistent Memory Region Support 00:15:04.850 ================================ 00:15:04.850 Supported: No 00:15:04.850 00:15:04.850 Admin Command Set Attributes 00:15:04.850 ============================ 00:15:04.850 Security Send/Receive: Not Supported 00:15:04.850 Format NVM: Not Supported 00:15:04.850 Firmware Activate/Download: Not Supported 00:15:04.850 Namespace Management: Not Supported 00:15:04.850 Device Self-Test: Not Supported 00:15:04.850 Directives: Not Supported 00:15:04.850 NVMe-MI: Not Supported 00:15:04.850 Virtualization Management: Not Supported 00:15:04.850 Doorbell Buffer Config: Not Supported 00:15:04.850 Get LBA Status Capability: Not Supported 00:15:04.850 Command & Feature Lockdown Capability: Not Supported 00:15:04.850 Abort Command Limit: 4 00:15:04.850 Async Event Request Limit: 4 00:15:04.850 Number of Firmware Slots: N/A 00:15:04.850 Firmware Slot 1 Read-Only: N/A 00:15:04.850 Firmware Activation Without Reset: N/A 00:15:04.851 Multiple Update Detection Support: N/A 00:15:04.851 Firmware Update Granularity: No Information Provided 00:15:04.851 Per-Namespace SMART Log: No 00:15:04.851 Asymmetric Namespace Access Log Page: Not Supported 00:15:04.851 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:04.851 Command Effects Log Page: Supported 00:15:04.851 Get Log Page Extended Data: Supported 00:15:04.851 Telemetry Log Pages: Not Supported 00:15:04.851 Persistent Event Log Pages: Not Supported 00:15:04.851 Supported Log Pages Log Page: May Support 00:15:04.851 Commands Supported & Effects Log Page: Not Supported 00:15:04.851 Feature Identifiers & Effects Log Page:May Support 00:15:04.851 NVMe-MI Commands & Effects Log Page: May Support 00:15:04.851 Data Area 4 for Telemetry Log: Not Supported 00:15:04.851 Error Log Page Entries Supported: 128 00:15:04.851 Keep Alive: Supported 00:15:04.851 Keep Alive Granularity: 10000 ms 00:15:04.851 00:15:04.851 NVM Command Set Attributes 00:15:04.851 ========================== 00:15:04.851 Submission Queue Entry Size 00:15:04.851 Max: 64 00:15:04.851 Min: 64 00:15:04.851 Completion Queue Entry Size 00:15:04.851 Max: 16 00:15:04.851 Min: 16 00:15:04.851 Number of Namespaces: 32 00:15:04.851 Compare Command: Supported 00:15:04.851 Write Uncorrectable Command: Not Supported 00:15:04.851 Dataset Management Command: Supported 00:15:04.851 Write Zeroes Command: Supported 00:15:04.851 Set Features Save Field: Not Supported 00:15:04.851 Reservations: Not Supported 00:15:04.851 Timestamp: Not Supported 00:15:04.851 Copy: Supported 00:15:04.851 Volatile Write Cache: Present 00:15:04.851 Atomic Write Unit (Normal): 1 00:15:04.851 Atomic Write Unit (PFail): 1 00:15:04.851 Atomic Compare & Write Unit: 1 00:15:04.851 Fused Compare & Write: Supported 00:15:04.851 Scatter-Gather List 00:15:04.851 SGL Command Set: Supported (Dword aligned) 00:15:04.851 SGL Keyed: Not Supported 00:15:04.851 SGL Bit Bucket Descriptor: Not Supported 00:15:04.851 SGL Metadata Pointer: Not Supported 00:15:04.851 Oversized SGL: Not Supported 00:15:04.851 SGL Metadata Address: Not Supported 00:15:04.851 SGL Offset: Not Supported 00:15:04.851 Transport SGL Data Block: Not Supported 00:15:04.851 Replay Protected Memory Block: Not Supported 00:15:04.851 00:15:04.851 Firmware Slot Information 00:15:04.851 ========================= 00:15:04.851 Active slot: 1 00:15:04.851 Slot 1 Firmware Revision: 25.01 00:15:04.851 00:15:04.851 00:15:04.851 Commands Supported and Effects 00:15:04.851 ============================== 00:15:04.851 Admin 
Commands 00:15:04.851 -------------- 00:15:04.851 Get Log Page (02h): Supported 00:15:04.851 Identify (06h): Supported 00:15:04.851 Abort (08h): Supported 00:15:04.851 Set Features (09h): Supported 00:15:04.851 Get Features (0Ah): Supported 00:15:04.851 Asynchronous Event Request (0Ch): Supported 00:15:04.851 Keep Alive (18h): Supported 00:15:04.851 I/O Commands 00:15:04.851 ------------ 00:15:04.851 Flush (00h): Supported LBA-Change 00:15:04.851 Write (01h): Supported LBA-Change 00:15:04.851 Read (02h): Supported 00:15:04.851 Compare (05h): Supported 00:15:04.851 Write Zeroes (08h): Supported LBA-Change 00:15:04.851 Dataset Management (09h): Supported LBA-Change 00:15:04.851 Copy (19h): Supported LBA-Change 00:15:04.851 00:15:04.851 Error Log 00:15:04.851 ========= 00:15:04.851 00:15:04.851 Arbitration 00:15:04.851 =========== 00:15:04.851 Arbitration Burst: 1 00:15:04.851 00:15:04.851 Power Management 00:15:04.851 ================ 00:15:04.851 Number of Power States: 1 00:15:04.851 Current Power State: Power State #0 00:15:04.851 Power State #0: 00:15:04.851 Max Power: 0.00 W 00:15:04.851 Non-Operational State: Operational 00:15:04.851 Entry Latency: Not Reported 00:15:04.851 Exit Latency: Not Reported 00:15:04.851 Relative Read Throughput: 0 00:15:04.851 Relative Read Latency: 0 00:15:04.851 Relative Write Throughput: 0 00:15:04.851 Relative Write Latency: 0 00:15:04.851 Idle Power: Not Reported 00:15:04.851 Active Power: Not Reported 00:15:04.851 Non-Operational Permissive Mode: Not Supported 00:15:04.851 00:15:04.851 Health Information 00:15:04.851 ================== 00:15:04.851 Critical Warnings: 00:15:04.851 Available Spare Space: OK 00:15:04.851 Temperature: OK 00:15:04.851 Device Reliability: OK 00:15:04.851 Read Only: No 00:15:04.851 Volatile Memory Backup: OK 00:15:04.851 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:04.851 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:04.851 Available Spare: 0% 00:15:04.851 [2024-12-10 14:17:05.563048] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:04.850 [2024-12-10 14:17:05.563057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:04.851 [2024-12-10 14:17:05.563085] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:15:04.851 [2024-12-10 14:17:05.563095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.851 [2024-12-10 14:17:05.563101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.851 [2024-12-10 14:17:05.563106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.851 [2024-12-10 14:17:05.563112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.851 [2024-12-10 14:17:05.563239] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:04.851 [2024-12-10 14:17:05.563248] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:04.851 [2024-12-10 14:17:05.564246]
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:04.851 [2024-12-10 14:17:05.564296] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:15:04.851 [2024-12-10 14:17:05.564302] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:15:04.851 [2024-12-10 14:17:05.565250] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:04.851 [2024-12-10 14:17:05.565260] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:15:04.851 [2024-12-10 14:17:05.565307] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:04.851 [2024-12-10 14:17:05.568224] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:05.110 Available Spare Threshold: 0% 00:15:05.110 Life Percentage Used: 0% 00:15:05.110 Data Units Read: 0 00:15:05.110 Data Units Written: 0 00:15:05.110 Host Read Commands: 0 00:15:05.110 Host Write Commands: 0 00:15:05.110 Controller Busy Time: 0 minutes 00:15:05.110 Power Cycles: 0 00:15:05.110 Power On Hours: 0 hours 00:15:05.110 Unsafe Shutdowns: 0 00:15:05.110 Unrecoverable Media Errors: 0 00:15:05.110 Lifetime Error Log Entries: 0 00:15:05.110 Warning Temperature Time: 0 minutes 00:15:05.110 Critical Temperature Time: 0 minutes 00:15:05.110 00:15:05.110 Number of Queues 00:15:05.110 ================ 00:15:05.110 Number of I/O Submission Queues: 127 00:15:05.110 Number of I/O Completion Queues: 127 00:15:05.110 00:15:05.110 Active Namespaces 00:15:05.110 ================= 00:15:05.110 Namespace ID:1 00:15:05.110 Error Recovery Timeout: Unlimited 00:15:05.110 Command Set Identifier: NVM (00h) 00:15:05.110 Deallocate: Supported 00:15:05.110 Deallocated/Unwritten Error: Not Supported 00:15:05.110 Deallocated Read Value: Unknown 00:15:05.110 Deallocate in Write Zeroes: Not Supported 00:15:05.110 Deallocated Guard Field: 0xFFFF 00:15:05.110 Flush: Supported 00:15:05.110 Reservation: Supported 00:15:05.110 Namespace Sharing Capabilities: Multiple Controllers 00:15:05.110 Size (in LBAs): 131072 (0GiB) 00:15:05.110 Capacity (in LBAs): 131072 (0GiB) 00:15:05.110 Utilization (in LBAs): 131072 (0GiB) 00:15:05.110 NGUID: 4A2048B6AD324DC1A0F5BE5EF421254D 00:15:05.110 UUID: 4a2048b6-ad32-4dc1-a0f5-be5ef421254d 00:15:05.110 Thin Provisioning: Not Supported 00:15:05.110 Per-NS Atomic Units: Yes 00:15:05.110 Atomic Boundary Size (Normal): 0 00:15:05.110 Atomic Boundary Size (PFail): 0 00:15:05.110 Atomic Boundary Offset: 0 00:15:05.110 Maximum Single Source Range Length: 65535 00:15:05.110 Maximum Copy Length: 65535 00:15:05.110 Maximum Source Range Count: 1 00:15:05.110 NGUID/EUI64 Never Reused: No 00:15:05.110 Namespace Write Protected: No 00:15:05.110 Number of LBA Formats: 1 00:15:05.110 Current LBA Format: LBA Format #00 00:15:05.110 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:05.110 00:15:05.110 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
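A quick gloss on the @84 command above, since the same flag set recurs through the rest of this suite (@85 swaps -w read for -w write). The restatement below is a sketch; the per-flag comments are my reading of spdk_nvme_perf's usage text, not something this log states:

# annotated re-run of the @84 perf invocation (comments are editorial glosses)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
  -s 256 \   # hugepage memory to reserve, in MB (assumed meaning)
  -g \       # single-segment hugepage allocation (assumed meaning)
  -q 128 \   # queue depth
  -o 4096 \  # I/O size in bytes
  -w read \  # access pattern: 100% reads
  -t 5 \     # run time in seconds
  -c 0x2     # core mask, i.e. lcore 1 only, matching "NSID 1 with lcore 1" in the output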
00:15:05.110 [2024-12-10 14:17:05.780316] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:10.382 Initializing NVMe Controllers 00:15:10.382 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:10.382 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:10.382 Initialization complete. Launching workers. 00:15:10.382 ======================================================== 00:15:10.382 Latency(us) 00:15:10.382 Device Information : IOPS MiB/s Average min max 00:15:10.382 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39923.71 155.95 3205.94 975.36 7226.29 00:15:10.382 ======================================================== 00:15:10.382 Total : 39923.71 155.95 3205.94 975.36 7226.29 00:15:10.382 00:15:10.382 [2024-12-10 14:17:10.800429] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:10.382 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:10.382 [2024-12-10 14:17:11.034507] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:15.650 Initializing NVMe Controllers 00:15:15.650 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:15.650 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:15.650 Initialization complete. Launching workers. 
00:15:15.650 ======================================================== 00:15:15.650 Latency(us) 00:15:15.650 Device Information : IOPS MiB/s Average min max 00:15:15.650 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15900.68 62.11 8055.36 6979.32 15962.96 00:15:15.650 ======================================================== 00:15:15.650 Total : 15900.68 62.11 8055.36 6979.32 15962.96 00:15:15.650 00:15:15.650 [2024-12-10 14:17:16.077800] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:15.650 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:15.650 [2024-12-10 14:17:16.290792] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:20.921 [2024-12-10 14:17:21.368544] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:20.921 Initializing NVMe Controllers 00:15:20.921 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:20.921 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:20.921 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:20.921 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:20.921 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:20.921 Initialization complete. Launching workers. 00:15:20.921 Starting thread on core 2 00:15:20.921 Starting thread on core 3 00:15:20.921 Starting thread on core 1 00:15:20.921 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:21.179 [2024-12-10 14:17:21.665572] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:24.474 [2024-12-10 14:17:24.728569] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:24.474 Initializing NVMe Controllers 00:15:24.475 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:24.475 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:24.475 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:24.475 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:24.475 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:24.475 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:24.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:24.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:24.475 Initialization complete. Launching workers. 
00:15:24.475 Starting thread on core 1 with urgent priority queue 00:15:24.475 Starting thread on core 2 with urgent priority queue 00:15:24.475 Starting thread on core 3 with urgent priority queue 00:15:24.475 Starting thread on core 0 with urgent priority queue 00:15:24.475 SPDK bdev Controller (SPDK1 ) core 0: 7625.00 IO/s 13.11 secs/100000 ios 00:15:24.475 SPDK bdev Controller (SPDK1 ) core 1: 7322.33 IO/s 13.66 secs/100000 ios 00:15:24.475 SPDK bdev Controller (SPDK1 ) core 2: 9197.33 IO/s 10.87 secs/100000 ios 00:15:24.475 SPDK bdev Controller (SPDK1 ) core 3: 8474.33 IO/s 11.80 secs/100000 ios 00:15:24.475 ======================================================== 00:15:24.475 00:15:24.475 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:24.475 [2024-12-10 14:17:25.018626] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:24.475 Initializing NVMe Controllers 00:15:24.475 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:24.475 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:24.475 Namespace ID: 1 size: 0GB 00:15:24.475 Initialization complete. 00:15:24.475 INFO: using host memory buffer for IO 00:15:24.475 Hello world! 00:15:24.475 [2024-12-10 14:17:25.055868] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:24.475 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:24.734 [2024-12-10 14:17:25.343576] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:25.679 Initializing NVMe Controllers 00:15:25.679 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:25.679 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:25.679 Initialization complete. Launching workers. 
00:15:25.679 submit (in ns) avg, min, max = 7036.8, 3240.0, 3999618.1 00:15:25.679 complete (in ns) avg, min, max = 21908.6, 1779.0, 3999499.0 00:15:25.679 00:15:25.679 Submit histogram 00:15:25.679 ================ 00:15:25.680 Range in us Cumulative Count 00:15:25.680 3.230 - 3.246: 0.0242% ( 4) 00:15:25.680 3.246 - 3.261: 0.5568% ( 88) 00:15:25.680 3.261 - 3.276: 2.8141% ( 373) 00:15:25.680 3.276 - 3.291: 7.5345% ( 780) 00:15:25.680 3.291 - 3.307: 12.8964% ( 886) 00:15:25.680 3.307 - 3.322: 18.9542% ( 1001) 00:15:25.680 3.322 - 3.337: 25.7262% ( 1119) 00:15:25.680 3.337 - 3.352: 31.6570% ( 980) 00:15:25.680 3.352 - 3.368: 37.6483% ( 990) 00:15:25.680 3.368 - 3.383: 43.2583% ( 927) 00:15:25.680 3.383 - 3.398: 49.2375% ( 988) 00:15:25.680 3.398 - 3.413: 54.6417% ( 893) 00:15:25.680 3.413 - 3.429: 61.8494% ( 1191) 00:15:25.680 3.429 - 3.444: 69.1661% ( 1209) 00:15:25.680 3.444 - 3.459: 74.4251% ( 869) 00:15:25.680 3.459 - 3.474: 79.0487% ( 764) 00:15:25.680 3.474 - 3.490: 82.1774% ( 517) 00:15:25.680 3.490 - 3.505: 84.5437% ( 391) 00:15:25.680 3.505 - 3.520: 85.9780% ( 237) 00:15:25.680 3.520 - 3.535: 86.8918% ( 151) 00:15:25.680 3.535 - 3.550: 87.4788% ( 97) 00:15:25.680 3.550 - 3.566: 88.0114% ( 88) 00:15:25.680 3.566 - 3.581: 88.7376% ( 120) 00:15:25.680 3.581 - 3.596: 89.4699% ( 121) 00:15:25.680 3.596 - 3.611: 90.3958% ( 153) 00:15:25.681 3.611 - 3.627: 91.4004% ( 166) 00:15:25.681 3.627 - 3.642: 92.3808% ( 162) 00:15:25.681 3.642 - 3.657: 93.3370% ( 158) 00:15:25.681 3.657 - 3.672: 94.2992% ( 159) 00:15:25.681 3.672 - 3.688: 95.3643% ( 176) 00:15:25.681 3.688 - 3.703: 96.2781% ( 151) 00:15:25.681 3.703 - 3.718: 97.0044% ( 120) 00:15:25.681 3.718 - 3.733: 97.6398% ( 105) 00:15:25.681 3.733 - 3.749: 98.1481% ( 84) 00:15:25.681 3.749 - 3.764: 98.4749% ( 54) 00:15:25.681 3.764 - 3.779: 98.8381% ( 60) 00:15:25.681 3.779 - 3.794: 99.0741% ( 39) 00:15:25.681 3.794 - 3.810: 99.2133% ( 23) 00:15:25.681 3.810 - 3.825: 99.3706% ( 26) 00:15:25.681 3.825 - 3.840: 99.4553% ( 14) 00:15:25.681 3.840 - 3.855: 99.4977% ( 7) 00:15:25.681 3.855 - 3.870: 99.5280% ( 5) 00:15:25.681 3.870 - 3.886: 99.5461% ( 3) 00:15:25.681 3.886 - 3.901: 99.5522% ( 1) 00:15:25.681 3.901 - 3.931: 99.5582% ( 1) 00:15:25.681 3.962 - 3.992: 99.5643% ( 1) 00:15:25.681 4.023 - 4.053: 99.5703% ( 1) 00:15:25.681 4.114 - 4.145: 99.5764% ( 1) 00:15:25.681 5.394 - 5.425: 99.5824% ( 1) 00:15:25.681 5.577 - 5.608: 99.5885% ( 1) 00:15:25.681 5.608 - 5.638: 99.5945% ( 1) 00:15:25.681 5.730 - 5.760: 99.6006% ( 1) 00:15:25.681 5.943 - 5.973: 99.6127% ( 2) 00:15:25.681 5.973 - 6.004: 99.6187% ( 1) 00:15:25.681 6.004 - 6.034: 99.6248% ( 1) 00:15:25.681 6.095 - 6.126: 99.6308% ( 1) 00:15:25.681 6.156 - 6.187: 99.6369% ( 1) 00:15:25.681 6.217 - 6.248: 99.6429% ( 1) 00:15:25.682 6.309 - 6.339: 99.6490% ( 1) 00:15:25.682 6.370 - 6.400: 99.6611% ( 2) 00:15:25.682 6.461 - 6.491: 99.6793% ( 3) 00:15:25.682 6.583 - 6.613: 99.6853% ( 1) 00:15:25.682 6.613 - 6.644: 99.6914% ( 1) 00:15:25.682 6.644 - 6.674: 99.6974% ( 1) 00:15:25.682 6.705 - 6.735: 99.7095% ( 2) 00:15:25.682 6.979 - 7.010: 99.7277% ( 3) 00:15:25.682 7.010 - 7.040: 99.7337% ( 1) 00:15:25.682 7.040 - 7.070: 99.7458% ( 2) 00:15:25.682 7.070 - 7.101: 99.7519% ( 1) 00:15:25.682 7.101 - 7.131: 99.7579% ( 1) 00:15:25.682 7.192 - 7.223: 99.7640% ( 1) 00:15:25.682 7.223 - 7.253: 99.7821% ( 3) 00:15:25.682 7.284 - 7.314: 99.7942% ( 2) 00:15:25.682 7.314 - 7.345: 99.8003% ( 1) 00:15:25.682 7.345 - 7.375: 99.8184% ( 3) 00:15:25.682 7.436 - 7.467: 99.8245% ( 1) 00:15:25.682 7.467 - 7.497: 99.8305% 
( 1) 00:15:25.682 7.924 - 7.985: 99.8366% ( 1) 00:15:25.682 7.985 - 8.046: 99.8427% ( 1) 00:15:25.682 [2024-12-10 14:17:26.365535] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:25.682 8.046 - 8.107: 99.8487% ( 1) 00:15:25.682 8.107 - 8.168: 99.8548% ( 1) 00:15:25.682 8.229 - 8.290: 99.8608% ( 1) 00:15:25.682 8.533 - 8.594: 99.8669% ( 1) 00:15:25.682 8.594 - 8.655: 99.8729% ( 1) 00:15:25.684 8.838 - 8.899: 99.8790% ( 1) 00:15:25.684 8.960 - 9.021: 99.8850% ( 1) 00:15:25.684 10.667 - 10.728: 99.8911% ( 1) 00:15:25.684 10.910 - 10.971: 99.8971% ( 1) 00:15:25.684 40.229 - 40.472: 99.9032% ( 1) 00:15:25.684 70.705 - 71.192: 99.9092% ( 1) 00:15:25.684 3573.272 - 3588.876: 99.9153% ( 1) 00:15:25.684 3994.575 - 4025.783: 100.0000% ( 14) 00:15:25.684 00:15:25.684 Complete histogram 00:15:25.684 ================== 00:15:25.684 Range in us Cumulative Count 00:15:25.684 1.775 - 1.783: 0.0303% ( 5) 00:15:25.684 1.783 - 1.790: 1.0288% ( 165) 00:15:25.684 1.790 - 1.798: 10.4212% ( 1552) 00:15:25.684 1.798 - 1.806: 34.0051% ( 3897) 00:15:25.684 1.806 - 1.813: 51.1135% ( 2827) 00:15:25.684 1.813 - 1.821: 56.8930% ( 955) 00:15:25.684 1.821 - 1.829: 59.7495% ( 472) 00:15:25.684 1.829 - 1.836: 61.8676% ( 350) 00:15:25.685 1.836 - 1.844: 63.5137% ( 272) 00:15:25.685 1.844 - 1.851: 68.3007% ( 791) 00:15:25.685 1.851 - 1.859: 78.6190% ( 1705) 00:15:25.685 1.859 - 1.867: 87.8540% ( 1526) 00:15:25.685 1.867 - 1.874: 92.5200% ( 771) 00:15:25.685 1.874 - 1.882: 94.8378% ( 383) 00:15:25.685 1.882 - 1.890: 96.3205% ( 245) 00:15:25.685 1.890 - 1.897: 96.9741% ( 108) 00:15:25.685 1.897 - 1.905: 97.3251% ( 58) 00:15:25.685 1.905 - 1.912: 97.5611% ( 39) 00:15:25.685 1.912 - 1.920: 97.8214% ( 43) 00:15:25.685 1.920 - 1.928: 98.1481% ( 54) 00:15:25.685 1.928 - 1.935: 98.4870% ( 56) 00:15:25.685 1.935 - 1.943: 98.8260% ( 56) 00:15:25.685 1.943 - 1.950: 99.0136% ( 31) 00:15:25.685 1.950 - 1.966: 99.1285% ( 19) 00:15:25.685 1.966 - 1.981: 99.1830% ( 9) 00:15:25.685 1.981 - 1.996: 99.2072% ( 4) 00:15:25.685 1.996 - 2.011: 99.2133% ( 1) 00:15:25.685 2.011 - 2.027: 99.2193% ( 1) 00:15:25.685 2.027 - 2.042: 99.2254% ( 1) 00:15:25.685 2.042 - 2.057: 99.2314% ( 1) 00:15:25.685 2.057 - 2.072: 99.2375% ( 1) 00:15:25.685 2.133 - 2.149: 99.2496% ( 2) 00:15:25.685 2.194 - 2.210: 99.2556% ( 1) 00:15:25.685 2.347 - 2.362: 99.2617% ( 1) 00:15:25.685 3.764 - 3.779: 99.2677% ( 1) 00:15:25.685 3.779 - 3.794: 99.2738% ( 1) 00:15:25.685 3.901 - 3.931: 99.2798% ( 1) 00:15:25.686 3.992 - 4.023: 99.2980% ( 3) 00:15:25.686 4.053 - 4.084: 99.3040% ( 1) 00:15:25.686 4.145 - 4.175: 99.3101% ( 1) 00:15:25.686 4.267 - 4.297: 99.3161% ( 1) 00:15:25.686 4.450 - 4.480: 99.3222% ( 1) 00:15:25.686 4.510 - 4.541: 99.3282% ( 1) 00:15:25.686 4.632 - 4.663: 99.3343% ( 1) 00:15:25.686 4.724 - 4.754: 99.3404% ( 1) 00:15:25.686 4.754 - 4.785: 99.3464% ( 1) 00:15:25.686 4.785 - 4.815: 99.3525% ( 1) 00:15:25.686 4.998 - 5.029: 99.3585% ( 1) 00:15:25.686 5.059 - 5.090: 99.3706% ( 2) 00:15:25.686 5.090 - 5.120: 99.3827% ( 2) 00:15:25.686 5.272 - 5.303: 99.3888% ( 1) 00:15:25.686 5.364 - 5.394: 99.3948% ( 1) 00:15:25.686 5.425 - 5.455: 99.4009% ( 1) 00:15:25.686 5.547 - 5.577: 99.4069% ( 1) 00:15:25.686 5.608 - 5.638: 99.4130% ( 1) 00:15:25.686 5.730 - 5.760: 99.4190% ( 1) 00:15:25.686 5.760 - 5.790: 99.4251% ( 1) 00:15:25.686 5.851 - 5.882: 99.4311% ( 1) 00:15:25.686 6.034 - 6.065: 99.4372% ( 1) 00:15:25.686 6.065 - 6.095: 99.4432% ( 1) 00:15:25.686 6.400 - 6.430: 99.4553% ( 2) 00:15:25.686 6.918 - 6.949: 
99.4614% ( 1) 00:15:25.686 7.040 - 7.070: 99.4674% ( 1) 00:15:25.686 7.314 - 7.345: 99.4735% ( 1) 00:15:25.686 7.436 - 7.467: 99.4795% ( 1) 00:15:25.686 8.716 - 8.777: 99.4856% ( 1) 00:15:25.686 8.899 - 8.960: 99.4916% ( 1) 00:15:25.686 141.410 - 142.385: 99.4977% ( 1) 00:15:25.686 3994.575 - 4025.783: 100.0000% ( 83) 00:15:25.686 00:15:25.686 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:25.686 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:25.686 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:25.686 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:25.686 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:25.944 [ 00:15:25.945 { 00:15:25.945 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:25.945 "subtype": "Discovery", 00:15:25.945 "listen_addresses": [], 00:15:25.945 "allow_any_host": true, 00:15:25.945 "hosts": [] 00:15:25.945 }, 00:15:25.945 { 00:15:25.945 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:25.945 "subtype": "NVMe", 00:15:25.945 "listen_addresses": [ 00:15:25.945 { 00:15:25.945 "trtype": "VFIOUSER", 00:15:25.945 "adrfam": "IPv4", 00:15:25.945 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:25.945 "trsvcid": "0" 00:15:25.945 } 00:15:25.945 ], 00:15:25.945 "allow_any_host": true, 00:15:25.945 "hosts": [], 00:15:25.945 "serial_number": "SPDK1", 00:15:25.945 "model_number": "SPDK bdev Controller", 00:15:25.945 "max_namespaces": 32, 00:15:25.945 "min_cntlid": 1, 00:15:25.945 "max_cntlid": 65519, 00:15:25.945 "namespaces": [ 00:15:25.945 { 00:15:25.945 "nsid": 1, 00:15:25.945 "bdev_name": "Malloc1", 00:15:25.945 "name": "Malloc1", 00:15:25.945 "nguid": "4A2048B6AD324DC1A0F5BE5EF421254D", 00:15:25.945 "uuid": "4a2048b6-ad32-4dc1-a0f5-be5ef421254d" 00:15:25.945 } 00:15:25.945 ] 00:15:25.945 }, 00:15:25.945 { 00:15:25.945 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:25.945 "subtype": "NVMe", 00:15:25.945 "listen_addresses": [ 00:15:25.945 { 00:15:25.945 "trtype": "VFIOUSER", 00:15:25.945 "adrfam": "IPv4", 00:15:25.945 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:25.945 "trsvcid": "0" 00:15:25.945 } 00:15:25.945 ], 00:15:25.945 "allow_any_host": true, 00:15:25.945 "hosts": [], 00:15:25.945 "serial_number": "SPDK2", 00:15:25.945 "model_number": "SPDK bdev Controller", 00:15:25.945 "max_namespaces": 32, 00:15:25.945 "min_cntlid": 1, 00:15:25.945 "max_cntlid": 65519, 00:15:25.945 "namespaces": [ 00:15:25.945 { 00:15:25.945 "nsid": 1, 00:15:25.945 "bdev_name": "Malloc2", 00:15:25.945 "name": "Malloc2", 00:15:25.945 "nguid": "3C0BFA6668DC40FA888E6AC7067FAA2F", 00:15:25.945 "uuid": "3c0bfa66-68dc-40fa-888e-6ac7067faa2f" 00:15:25.945 } 00:15:25.945 ] 00:15:25.945 } 00:15:25.945 ] 00:15:25.945 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:25.945 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1609261 00:15:25.945 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:25.945 
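The nvmf_get_subsystems JSON above is the pre-test baseline: cnode1 exposes only Malloc1 (nsid 1) and cnode2 only Malloc2, each over a VFIOUSER listener with trsvcid 0. To spot-check the same state by hand, something like the following works (the jq filter is an editorial illustration, not part of nvmf_vfio_user.sh):

# summarize subsystem NQNs and their namespace bdevs (jq filter is illustrative)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems \
  | jq -r '.[] | "\(.nqn): \([.namespaces[]?.bdev_name] | join(", "))"'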
14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:25.945 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:25.945 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:25.945 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:25.945 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:25.945 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:25.945 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:26.203 [2024-12-10 14:17:26.790722] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:26.203 Malloc3 00:15:26.203 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:26.462 [2024-12-10 14:17:27.032587] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:26.462 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:26.462 Asynchronous Event Request test 00:15:26.462 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:26.462 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:26.462 Registering asynchronous event callbacks... 00:15:26.462 Starting namespace attribute notice tests for all controllers... 00:15:26.462 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:26.462 aer_cb - Changed Namespace 00:15:26.462 Cleaning up... 
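Condensing the AER exercise above: the aer binary subscribed for asynchronous events (aen_event_type 0x02 with log page 4 is the changed-namespace-list notice it then received), while the test hot-added a namespace behind its back. The RPC sequence, restated as a sketch (names, sizes and NSID exactly as the test used them; the size comment assumes bdev_malloc_create takes total size in MB followed by block size):

# hot-add a second namespace to a live subsystem and verify it
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_malloc_create 64 512 --name Malloc3                       # new 64 MB bdev, 512-byte blocks
$RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2  # attach as NSID 2; raises the AEN
$RPC nvmf_get_subsystems                                            # confirm Malloc3 under cnode1

The listing that follows shows exactly that: Malloc3 now sits under cnode1 as nsid 2.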
00:15:26.722 [ 00:15:26.722 { 00:15:26.722 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:26.722 "subtype": "Discovery", 00:15:26.722 "listen_addresses": [], 00:15:26.722 "allow_any_host": true, 00:15:26.722 "hosts": [] 00:15:26.722 }, 00:15:26.722 { 00:15:26.722 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:26.722 "subtype": "NVMe", 00:15:26.722 "listen_addresses": [ 00:15:26.722 { 00:15:26.722 "trtype": "VFIOUSER", 00:15:26.722 "adrfam": "IPv4", 00:15:26.722 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:26.722 "trsvcid": "0" 00:15:26.722 } 00:15:26.722 ], 00:15:26.722 "allow_any_host": true, 00:15:26.722 "hosts": [], 00:15:26.722 "serial_number": "SPDK1", 00:15:26.722 "model_number": "SPDK bdev Controller", 00:15:26.722 "max_namespaces": 32, 00:15:26.722 "min_cntlid": 1, 00:15:26.722 "max_cntlid": 65519, 00:15:26.722 "namespaces": [ 00:15:26.722 { 00:15:26.722 "nsid": 1, 00:15:26.722 "bdev_name": "Malloc1", 00:15:26.722 "name": "Malloc1", 00:15:26.722 "nguid": "4A2048B6AD324DC1A0F5BE5EF421254D", 00:15:26.722 "uuid": "4a2048b6-ad32-4dc1-a0f5-be5ef421254d" 00:15:26.722 }, 00:15:26.722 { 00:15:26.722 "nsid": 2, 00:15:26.722 "bdev_name": "Malloc3", 00:15:26.722 "name": "Malloc3", 00:15:26.722 "nguid": "01340A3081D24C4AAD3EC53BF506C9DD", 00:15:26.722 "uuid": "01340a30-81d2-4c4a-ad3e-c53bf506c9dd" 00:15:26.722 } 00:15:26.722 ] 00:15:26.722 }, 00:15:26.722 { 00:15:26.722 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:26.722 "subtype": "NVMe", 00:15:26.722 "listen_addresses": [ 00:15:26.722 { 00:15:26.722 "trtype": "VFIOUSER", 00:15:26.722 "adrfam": "IPv4", 00:15:26.722 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:26.722 "trsvcid": "0" 00:15:26.722 } 00:15:26.722 ], 00:15:26.722 "allow_any_host": true, 00:15:26.722 "hosts": [], 00:15:26.722 "serial_number": "SPDK2", 00:15:26.722 "model_number": "SPDK bdev Controller", 00:15:26.722 "max_namespaces": 32, 00:15:26.722 "min_cntlid": 1, 00:15:26.722 "max_cntlid": 65519, 00:15:26.722 "namespaces": [ 00:15:26.722 { 00:15:26.722 "nsid": 1, 00:15:26.722 "bdev_name": "Malloc2", 00:15:26.722 "name": "Malloc2", 00:15:26.722 "nguid": "3C0BFA6668DC40FA888E6AC7067FAA2F", 00:15:26.722 "uuid": "3c0bfa66-68dc-40fa-888e-6ac7067faa2f" 00:15:26.722 } 00:15:26.722 ] 00:15:26.722 } 00:15:26.722 ] 00:15:26.722 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1609261 00:15:26.722 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:26.722 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:26.722 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:26.722 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:26.722 [2024-12-10 14:17:27.282566] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
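The @83 identify run that has just started above was launched with component debug logging, which is why a long run of *DEBUG* traces (the controller bring-up state machine in nvme_ctrlr.c, register accesses in nvme_vfio_user.c, BAR mapping in vfio_user_pci.c) precedes the actual identify output. Restated as a sketch, with my gloss on -L as a comment:

# identify against the second vfio-user controller with debug traces on
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
  -g -L nvme -L nvme_vfio -L vfio_pci
# -L <component> turns on DEBUG-level logging for that component (assumed semantics);
# the three flags here map to the nvme, nvme_vfio and vfio_pci trace lines below.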
00:15:26.722 [2024-12-10 14:17:27.282600] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609364 ] 00:15:26.722 [2024-12-10 14:17:27.321599] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:26.722 [2024-12-10 14:17:27.326836] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:26.722 [2024-12-10 14:17:27.326860] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f74104d4000 00:15:26.722 [2024-12-10 14:17:27.327846] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:26.722 [2024-12-10 14:17:27.328849] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:26.722 [2024-12-10 14:17:27.329856] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:26.723 [2024-12-10 14:17:27.330859] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:26.723 [2024-12-10 14:17:27.331870] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:26.723 [2024-12-10 14:17:27.332877] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:26.723 [2024-12-10 14:17:27.333886] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:26.723 [2024-12-10 14:17:27.334889] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:26.723 [2024-12-10 14:17:27.335896] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:26.723 [2024-12-10 14:17:27.335905] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f74104c9000 00:15:26.723 [2024-12-10 14:17:27.336817] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:26.723 [2024-12-10 14:17:27.346180] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:26.723 [2024-12-10 14:17:27.346204] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:26.723 [2024-12-10 14:17:27.351285] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:26.723 [2024-12-10 14:17:27.351324] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:26.723 [2024-12-10 14:17:27.351395] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:26.723 
[2024-12-10 14:17:27.351408] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:26.723 [2024-12-10 14:17:27.351415] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:26.723 [2024-12-10 14:17:27.352291] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:26.723 [2024-12-10 14:17:27.352300] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:26.723 [2024-12-10 14:17:27.352308] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:26.723 [2024-12-10 14:17:27.353291] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:26.723 [2024-12-10 14:17:27.353299] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:26.723 [2024-12-10 14:17:27.353306] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:26.723 [2024-12-10 14:17:27.354307] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:26.723 [2024-12-10 14:17:27.354315] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:26.723 [2024-12-10 14:17:27.355315] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:26.723 [2024-12-10 14:17:27.355324] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:26.723 [2024-12-10 14:17:27.355328] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:26.723 [2024-12-10 14:17:27.355334] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:26.723 [2024-12-10 14:17:27.355442] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:26.723 [2024-12-10 14:17:27.355446] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:26.723 [2024-12-10 14:17:27.355451] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:26.723 [2024-12-10 14:17:27.356329] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:26.723 [2024-12-10 14:17:27.357335] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:26.723 [2024-12-10 14:17:27.358347] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:26.723 [2024-12-10 14:17:27.359349] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:26.723 [2024-12-10 14:17:27.359387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:26.723 [2024-12-10 14:17:27.360362] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:26.723 [2024-12-10 14:17:27.360370] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:26.723 [2024-12-10 14:17:27.360375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:26.723 [2024-12-10 14:17:27.360394] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:26.723 [2024-12-10 14:17:27.360401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:26.723 [2024-12-10 14:17:27.360415] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:26.723 [2024-12-10 14:17:27.360420] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:26.723 [2024-12-10 14:17:27.360423] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:26.723 [2024-12-10 14:17:27.360434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:26.723 [2024-12-10 14:17:27.369223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:26.723 [2024-12-10 14:17:27.369235] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:26.723 [2024-12-10 14:17:27.369242] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:26.723 [2024-12-10 14:17:27.369246] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:26.723 [2024-12-10 14:17:27.369251] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:26.723 [2024-12-10 14:17:27.369255] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:26.723 [2024-12-10 14:17:27.369259] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:26.723 [2024-12-10 14:17:27.369263] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:26.723 [2024-12-10 14:17:27.369270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:26.723 [2024-12-10 
14:17:27.369280] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:26.723 [2024-12-10 14:17:27.377221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:26.723 [2024-12-10 14:17:27.377232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.723 [2024-12-10 14:17:27.377240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.723 [2024-12-10 14:17:27.377247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.723 [2024-12-10 14:17:27.377254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.723 [2024-12-10 14:17:27.377259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:26.723 [2024-12-10 14:17:27.377267] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:26.723 [2024-12-10 14:17:27.377275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:26.723 [2024-12-10 14:17:27.385222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:26.723 [2024-12-10 14:17:27.385231] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:26.723 [2024-12-10 14:17:27.385238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:26.723 [2024-12-10 14:17:27.385245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:26.723 [2024-12-10 14:17:27.385250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:26.723 [2024-12-10 14:17:27.385258] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:26.723 [2024-12-10 14:17:27.393221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:26.723 [2024-12-10 14:17:27.393279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:26.723 [2024-12-10 14:17:27.393286] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:26.723 [2024-12-10 14:17:27.393293] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:26.723 [2024-12-10 14:17:27.393297] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:15:26.723 [2024-12-10 14:17:27.393301] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:26.723 [2024-12-10 14:17:27.393306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:26.723 [2024-12-10 14:17:27.401223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:26.723 [2024-12-10 14:17:27.401233] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:26.723 [2024-12-10 14:17:27.401246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:26.723 [2024-12-10 14:17:27.401253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:26.723 [2024-12-10 14:17:27.401259] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:26.723 [2024-12-10 14:17:27.401263] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:26.724 [2024-12-10 14:17:27.401266] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:26.724 [2024-12-10 14:17:27.401271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:26.724 [2024-12-10 14:17:27.409224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:26.724 [2024-12-10 14:17:27.409239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:26.724 [2024-12-10 14:17:27.409246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:26.724 [2024-12-10 14:17:27.409253] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:26.724 [2024-12-10 14:17:27.409257] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:26.724 [2024-12-10 14:17:27.409260] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:26.724 [2024-12-10 14:17:27.409266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:26.724 [2024-12-10 14:17:27.417224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:26.724 [2024-12-10 14:17:27.417240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:26.724 [2024-12-10 14:17:27.417246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:26.724 [2024-12-10 14:17:27.417253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:15:26.724 [2024-12-10 14:17:27.417260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:26.724 [2024-12-10 14:17:27.417265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:26.724 [2024-12-10 14:17:27.417270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:26.724 [2024-12-10 14:17:27.417275] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:26.724 [2024-12-10 14:17:27.417279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:26.724 [2024-12-10 14:17:27.417284] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:26.724 [2024-12-10 14:17:27.417299] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:26.724 [2024-12-10 14:17:27.425225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:26.724 [2024-12-10 14:17:27.425239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:26.724 [2024-12-10 14:17:27.433224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:26.724 [2024-12-10 14:17:27.433237] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:26.724 [2024-12-10 14:17:27.441223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:26.724 [2024-12-10 14:17:27.441236] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:26.724 [2024-12-10 14:17:27.449224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:26.724 [2024-12-10 14:17:27.449239] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:26.724 [2024-12-10 14:17:27.449244] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:26.724 [2024-12-10 14:17:27.449247] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:26.724 [2024-12-10 14:17:27.449250] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:26.724 [2024-12-10 14:17:27.449253] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:26.724 [2024-12-10 14:17:27.449259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:26.724 [2024-12-10 14:17:27.449265] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:26.724 
[2024-12-10 14:17:27.449269] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:26.724 [2024-12-10 14:17:27.449273] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:26.724 [2024-12-10 14:17:27.449279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:26.724 [2024-12-10 14:17:27.449285] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:26.724 [2024-12-10 14:17:27.449289] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:26.724 [2024-12-10 14:17:27.449292] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:26.724 [2024-12-10 14:17:27.449298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:26.724 [2024-12-10 14:17:27.449304] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:26.724 [2024-12-10 14:17:27.449308] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:26.724 [2024-12-10 14:17:27.449311] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:26.724 [2024-12-10 14:17:27.449316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:26.724 [2024-12-10 14:17:27.457223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:26.724 [2024-12-10 14:17:27.457236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:26.724 [2024-12-10 14:17:27.457245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:26.724 [2024-12-10 14:17:27.457251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:26.724 ===================================================== 00:15:26.724 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:26.724 ===================================================== 00:15:26.724 Controller Capabilities/Features 00:15:26.724 ================================ 00:15:26.724 Vendor ID: 4e58 00:15:26.724 Subsystem Vendor ID: 4e58 00:15:26.724 Serial Number: SPDK2 00:15:26.724 Model Number: SPDK bdev Controller 00:15:26.724 Firmware Version: 25.01 00:15:26.724 Recommended Arb Burst: 6 00:15:26.724 IEEE OUI Identifier: 8d 6b 50 00:15:26.724 Multi-path I/O 00:15:26.724 May have multiple subsystem ports: Yes 00:15:26.724 May have multiple controllers: Yes 00:15:26.724 Associated with SR-IOV VF: No 00:15:26.724 Max Data Transfer Size: 131072 00:15:26.724 Max Number of Namespaces: 32 00:15:26.724 Max Number of I/O Queues: 127 00:15:26.724 NVMe Specification Version (VS): 1.3 00:15:26.724 NVMe Specification Version (Identify): 1.3 00:15:26.724 Maximum Queue Entries: 256 00:15:26.724 Contiguous Queues Required: Yes 00:15:26.724 Arbitration Mechanisms Supported 00:15:26.724 Weighted Round Robin: Not Supported 00:15:26.724 Vendor Specific: Not 
Supported 00:15:26.724 Reset Timeout: 15000 ms 00:15:26.724 Doorbell Stride: 4 bytes 00:15:26.724 NVM Subsystem Reset: Not Supported 00:15:26.724 Command Sets Supported 00:15:26.724 NVM Command Set: Supported 00:15:26.724 Boot Partition: Not Supported 00:15:26.724 Memory Page Size Minimum: 4096 bytes 00:15:26.724 Memory Page Size Maximum: 4096 bytes 00:15:26.724 Persistent Memory Region: Not Supported 00:15:26.724 Optional Asynchronous Events Supported 00:15:26.724 Namespace Attribute Notices: Supported 00:15:26.724 Firmware Activation Notices: Not Supported 00:15:26.724 ANA Change Notices: Not Supported 00:15:26.724 PLE Aggregate Log Change Notices: Not Supported 00:15:26.724 LBA Status Info Alert Notices: Not Supported 00:15:26.724 EGE Aggregate Log Change Notices: Not Supported 00:15:26.724 Normal NVM Subsystem Shutdown event: Not Supported 00:15:26.724 Zone Descriptor Change Notices: Not Supported 00:15:26.724 Discovery Log Change Notices: Not Supported 00:15:26.724 Controller Attributes 00:15:26.724 128-bit Host Identifier: Supported 00:15:26.724 Non-Operational Permissive Mode: Not Supported 00:15:26.724 NVM Sets: Not Supported 00:15:26.724 Read Recovery Levels: Not Supported 00:15:26.724 Endurance Groups: Not Supported 00:15:26.724 Predictable Latency Mode: Not Supported 00:15:26.724 Traffic Based Keep ALive: Not Supported 00:15:26.724 Namespace Granularity: Not Supported 00:15:26.724 SQ Associations: Not Supported 00:15:26.724 UUID List: Not Supported 00:15:26.724 Multi-Domain Subsystem: Not Supported 00:15:26.724 Fixed Capacity Management: Not Supported 00:15:26.724 Variable Capacity Management: Not Supported 00:15:26.724 Delete Endurance Group: Not Supported 00:15:26.724 Delete NVM Set: Not Supported 00:15:26.724 Extended LBA Formats Supported: Not Supported 00:15:26.724 Flexible Data Placement Supported: Not Supported 00:15:26.724 00:15:26.724 Controller Memory Buffer Support 00:15:26.724 ================================ 00:15:26.724 Supported: No 00:15:26.724 00:15:26.724 Persistent Memory Region Support 00:15:26.724 ================================ 00:15:26.724 Supported: No 00:15:26.724 00:15:26.724 Admin Command Set Attributes 00:15:26.724 ============================ 00:15:26.724 Security Send/Receive: Not Supported 00:15:26.724 Format NVM: Not Supported 00:15:26.724 Firmware Activate/Download: Not Supported 00:15:26.724 Namespace Management: Not Supported 00:15:26.724 Device Self-Test: Not Supported 00:15:26.724 Directives: Not Supported 00:15:26.724 NVMe-MI: Not Supported 00:15:26.724 Virtualization Management: Not Supported 00:15:26.724 Doorbell Buffer Config: Not Supported 00:15:26.725 Get LBA Status Capability: Not Supported 00:15:26.725 Command & Feature Lockdown Capability: Not Supported 00:15:26.725 Abort Command Limit: 4 00:15:26.725 Async Event Request Limit: 4 00:15:26.725 Number of Firmware Slots: N/A 00:15:26.725 Firmware Slot 1 Read-Only: N/A 00:15:26.725 Firmware Activation Without Reset: N/A 00:15:26.725 Multiple Update Detection Support: N/A 00:15:26.725 Firmware Update Granularity: No Information Provided 00:15:26.725 Per-Namespace SMART Log: No 00:15:26.725 Asymmetric Namespace Access Log Page: Not Supported 00:15:26.725 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:26.725 Command Effects Log Page: Supported 00:15:26.725 Get Log Page Extended Data: Supported 00:15:26.725 Telemetry Log Pages: Not Supported 00:15:26.725 Persistent Event Log Pages: Not Supported 00:15:26.725 Supported Log Pages Log Page: May Support 00:15:26.725 Commands Supported & 
Effects Log Page: Not Supported 00:15:26.725 Feature Identifiers & Effects Log Page:May Support 00:15:26.725 NVMe-MI Commands & Effects Log Page: May Support 00:15:26.725 Data Area 4 for Telemetry Log: Not Supported 00:15:26.725 Error Log Page Entries Supported: 128 00:15:26.725 Keep Alive: Supported 00:15:26.725 Keep Alive Granularity: 10000 ms 00:15:26.725 00:15:26.725 NVM Command Set Attributes 00:15:26.725 ========================== 00:15:26.725 Submission Queue Entry Size 00:15:26.725 Max: 64 00:15:26.725 Min: 64 00:15:26.725 Completion Queue Entry Size 00:15:26.725 Max: 16 00:15:26.725 Min: 16 00:15:26.725 Number of Namespaces: 32 00:15:26.725 Compare Command: Supported 00:15:26.725 Write Uncorrectable Command: Not Supported 00:15:26.725 Dataset Management Command: Supported 00:15:26.725 Write Zeroes Command: Supported 00:15:26.725 Set Features Save Field: Not Supported 00:15:26.725 Reservations: Not Supported 00:15:26.725 Timestamp: Not Supported 00:15:26.725 Copy: Supported 00:15:26.725 Volatile Write Cache: Present 00:15:26.725 Atomic Write Unit (Normal): 1 00:15:26.725 Atomic Write Unit (PFail): 1 00:15:26.725 Atomic Compare & Write Unit: 1 00:15:26.725 Fused Compare & Write: Supported 00:15:26.725 Scatter-Gather List 00:15:26.725 SGL Command Set: Supported (Dword aligned) 00:15:26.725 SGL Keyed: Not Supported 00:15:26.725 SGL Bit Bucket Descriptor: Not Supported 00:15:26.725 SGL Metadata Pointer: Not Supported 00:15:26.725 Oversized SGL: Not Supported 00:15:26.725 SGL Metadata Address: Not Supported 00:15:26.725 SGL Offset: Not Supported 00:15:26.725 Transport SGL Data Block: Not Supported 00:15:26.725 Replay Protected Memory Block: Not Supported 00:15:26.725 00:15:26.725 Firmware Slot Information 00:15:26.725 ========================= 00:15:26.725 Active slot: 1 00:15:26.725 Slot 1 Firmware Revision: 25.01 00:15:26.725 00:15:26.725 00:15:26.725 Commands Supported and Effects 00:15:26.725 ============================== 00:15:26.725 Admin Commands 00:15:26.725 -------------- 00:15:26.725 Get Log Page (02h): Supported 00:15:26.725 Identify (06h): Supported 00:15:26.725 Abort (08h): Supported 00:15:26.725 Set Features (09h): Supported 00:15:26.725 Get Features (0Ah): Supported 00:15:26.725 Asynchronous Event Request (0Ch): Supported 00:15:26.725 Keep Alive (18h): Supported 00:15:26.725 I/O Commands 00:15:26.725 ------------ 00:15:26.725 Flush (00h): Supported LBA-Change 00:15:26.725 Write (01h): Supported LBA-Change 00:15:26.725 Read (02h): Supported 00:15:26.725 Compare (05h): Supported 00:15:26.725 Write Zeroes (08h): Supported LBA-Change 00:15:26.725 Dataset Management (09h): Supported LBA-Change 00:15:26.725 Copy (19h): Supported LBA-Change 00:15:26.725 00:15:26.725 Error Log 00:15:26.725 ========= 00:15:26.725 00:15:26.725 Arbitration 00:15:26.725 =========== 00:15:26.725 Arbitration Burst: 1 00:15:26.725 00:15:26.725 Power Management 00:15:26.725 ================ 00:15:26.725 Number of Power States: 1 00:15:26.725 Current Power State: Power State #0 00:15:26.725 Power State #0: 00:15:26.725 Max Power: 0.00 W 00:15:26.725 Non-Operational State: Operational 00:15:26.725 Entry Latency: Not Reported 00:15:26.725 Exit Latency: Not Reported 00:15:26.725 Relative Read Throughput: 0 00:15:26.725 Relative Read Latency: 0 00:15:26.725 Relative Write Throughput: 0 00:15:26.725 Relative Write Latency: 0 00:15:26.725 Idle Power: Not Reported 00:15:26.725 Active Power: Not Reported 00:15:26.725 Non-Operational Permissive Mode: Not Supported 00:15:26.725 00:15:26.725 Health Information 
00:15:26.725 ================== 00:15:26.725 Critical Warnings: 00:15:26.725 Available Spare Space: OK 00:15:26.725 Temperature: OK 00:15:26.725 Device Reliability: OK 00:15:26.725 Read Only: No 00:15:26.725 Volatile Memory Backup: OK 00:15:26.725 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:26.725 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:26.725 Available Spare: 0% 00:15:26.725
[2024-12-10 14:17:27.457338] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:26.984
[2024-12-10 14:17:27.465224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:26.984
[2024-12-10 14:17:27.465255] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:26.984
[2024-12-10 14:17:27.465264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.984
[2024-12-10 14:17:27.465269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.984
[2024-12-10 14:17:27.465275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.984
[2024-12-10 14:17:27.465280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.984
[2024-12-10 14:17:27.465318] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:26.984
[2024-12-10 14:17:27.465328] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:26.984
[2024-12-10 14:17:27.466321] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:26.984
[2024-12-10 14:17:27.466365] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:26.984
[2024-12-10 14:17:27.466371] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:26.984
[2024-12-10 14:17:27.467325] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:26.984
[2024-12-10 14:17:27.467338] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:26.984
[2024-12-10 14:17:27.467385] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:26.984
[2024-12-10 14:17:27.468351] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:26.984
Available Spare Threshold: 0% 00:15:26.984 Life Percentage Used: 0% 00:15:26.984 Data Units Read: 0 00:15:26.984 Data Units Written: 0 00:15:26.984 Host Read Commands: 0 00:15:26.984 Host Write Commands: 0 00:15:26.984 Controller Busy Time: 0 minutes 00:15:26.984 Power Cycles: 0 00:15:26.984 Power On Hours: 0 hours 00:15:26.984 Unsafe Shutdowns: 0 00:15:26.984 Unrecoverable Media Errors: 0 00:15:26.984 Lifetime Error Log Entries: 0 00:15:26.984 Warning Temperature
Time: 0 minutes 00:15:26.984 Critical Temperature Time: 0 minutes 00:15:26.984 00:15:26.984 Number of Queues 00:15:26.984 ================ 00:15:26.984 Number of I/O Submission Queues: 127 00:15:26.984 Number of I/O Completion Queues: 127 00:15:26.984 00:15:26.984 Active Namespaces 00:15:26.984 ================= 00:15:26.984 Namespace ID:1 00:15:26.984 Error Recovery Timeout: Unlimited 00:15:26.984 Command Set Identifier: NVM (00h) 00:15:26.984 Deallocate: Supported 00:15:26.984 Deallocated/Unwritten Error: Not Supported 00:15:26.984 Deallocated Read Value: Unknown 00:15:26.984 Deallocate in Write Zeroes: Not Supported 00:15:26.984 Deallocated Guard Field: 0xFFFF 00:15:26.984 Flush: Supported 00:15:26.984 Reservation: Supported 00:15:26.984 Namespace Sharing Capabilities: Multiple Controllers 00:15:26.984 Size (in LBAs): 131072 (0GiB) 00:15:26.984 Capacity (in LBAs): 131072 (0GiB) 00:15:26.984 Utilization (in LBAs): 131072 (0GiB) 00:15:26.984 NGUID: 3C0BFA6668DC40FA888E6AC7067FAA2F 00:15:26.984 UUID: 3c0bfa66-68dc-40fa-888e-6ac7067faa2f 00:15:26.984 Thin Provisioning: Not Supported 00:15:26.984 Per-NS Atomic Units: Yes 00:15:26.984 Atomic Boundary Size (Normal): 0 00:15:26.984 Atomic Boundary Size (PFail): 0 00:15:26.984 Atomic Boundary Offset: 0 00:15:26.984 Maximum Single Source Range Length: 65535 00:15:26.984 Maximum Copy Length: 65535 00:15:26.984 Maximum Source Range Count: 1 00:15:26.984 NGUID/EUI64 Never Reused: No 00:15:26.984 Namespace Write Protected: No 00:15:26.984 Number of LBA Formats: 1 00:15:26.984 Current LBA Format: LBA Format #00 00:15:26.984 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:26.984 00:15:26.984 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:26.984 [2024-12-10 14:17:27.697473] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:32.253 Initializing NVMe Controllers 00:15:32.253 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:32.253 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:32.253 Initialization complete. Launching workers. 
00:15:32.253 ======================================================== 00:15:32.253 Latency(us) 00:15:32.253 Device Information : IOPS MiB/s Average min max 00:15:32.253 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39945.14 156.04 3203.98 980.76 6633.89 00:15:32.253 ======================================================== 00:15:32.253 Total : 39945.14 156.04 3203.98 980.76 6633.89 00:15:32.253 00:15:32.253 [2024-12-10 14:17:32.801469] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:32.253 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:32.511 [2024-12-10 14:17:33.043201] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:37.782 Initializing NVMe Controllers 00:15:37.782 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:37.782 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:37.782 Initialization complete. Launching workers. 00:15:37.782 ======================================================== 00:15:37.782 Latency(us) 00:15:37.782 Device Information : IOPS MiB/s Average min max 00:15:37.782 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39894.08 155.84 3208.33 970.81 10351.27 00:15:37.782 ======================================================== 00:15:37.782 Total : 39894.08 155.84 3208.33 970.81 10351.27 00:15:37.782 00:15:37.782 [2024-12-10 14:17:38.062311] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:37.782 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:37.782 [2024-12-10 14:17:38.275583] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:43.053 [2024-12-10 14:17:43.418304] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:43.053 Initializing NVMe Controllers 00:15:43.053 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:43.053 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:43.053 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:43.053 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:43.053 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:43.053 Initialization complete. Launching workers. 
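A quick cross-check of the two perf tables above: with -q 128 outstanding I/Os of -o 4096 bytes each, bandwidth should equal IOPS x block size, and by Little's law IOPS ~= queue depth / average latency. For the read run:

    39945.14 IOPS x 4096 B = 163,615,293 B/s = 156.04 MiB/s    (matches the MiB/s column)
    128 / 3203.98 us ~= 39,950 IOPS                            (matches the IOPS column to within rounding)

The write run is consistent the same way: 39894.08 IOPS x 4096 B ~= 155.84 MiB/s.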
00:15:43.053 Starting thread on core 2 00:15:43.053 Starting thread on core 3 00:15:43.053 Starting thread on core 1 00:15:43.053 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:43.053 [2024-12-10 14:17:43.712717] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:46.342 [2024-12-10 14:17:46.770513] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:46.342 Initializing NVMe Controllers 00:15:46.342 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:46.342 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:46.342 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:46.342 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:46.342 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:46.342 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:46.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:46.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:46.342 Initialization complete. Launching workers. 00:15:46.342 Starting thread on core 1 with urgent priority queue 00:15:46.342 Starting thread on core 2 with urgent priority queue 00:15:46.342 Starting thread on core 3 with urgent priority queue 00:15:46.342 Starting thread on core 0 with urgent priority queue 00:15:46.343 SPDK bdev Controller (SPDK2 ) core 0: 8406.67 IO/s 11.90 secs/100000 ios 00:15:46.343 SPDK bdev Controller (SPDK2 ) core 1: 8416.00 IO/s 11.88 secs/100000 ios 00:15:46.343 SPDK bdev Controller (SPDK2 ) core 2: 8061.67 IO/s 12.40 secs/100000 ios 00:15:46.343 SPDK bdev Controller (SPDK2 ) core 3: 9990.67 IO/s 10.01 secs/100000 ios 00:15:46.343 ======================================================== 00:15:46.343 00:15:46.343 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:46.343 [2024-12-10 14:17:47.064597] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:46.343 Initializing NVMe Controllers 00:15:46.343 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:46.343 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:46.343 Namespace ID: 1 size: 0GB 00:15:46.343 Initialization complete. 00:15:46.343 INFO: using host memory buffer for IO 00:15:46.343 Hello world! 
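In the arbitration table above, the IO/s and secs/100000 ios columns are reciprocals of each other, i.e. secs/100000 ios = 100000 / IO/s:

    core 0: 100000 / 8406.67 IO/s ~= 11.90 s
    core 3: 100000 / 9990.67 IO/s ~= 10.01 s

both matching the printed values.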
00:15:46.343 [2024-12-10 14:17:47.077688] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:46.601 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:46.859 [2024-12-10 14:17:47.356993] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:47.795 Initializing NVMe Controllers 00:15:47.795 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:47.795 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:47.795 Initialization complete. Launching workers. 00:15:47.795 submit (in ns) avg, min, max = 7446.0, 3122.9, 3999405.7 00:15:47.795 complete (in ns) avg, min, max = 19515.3, 1711.4, 3999375.2 00:15:47.795 00:15:47.795 Submit histogram 00:15:47.795 ================ 00:15:47.795 Range in us Cumulative Count 00:15:47.795 3.109 - 3.124: 0.0060% ( 1) 00:15:47.795 3.124 - 3.139: 0.0120% ( 1) 00:15:47.795 3.139 - 3.154: 0.0180% ( 1) 00:15:47.795 3.154 - 3.170: 0.0240% ( 1) 00:15:47.795 3.170 - 3.185: 0.0479% ( 4) 00:15:47.795 3.185 - 3.200: 0.2277% ( 30) 00:15:47.795 3.200 - 3.215: 1.9295% ( 284) 00:15:47.795 3.215 - 3.230: 6.0223% ( 683) 00:15:47.795 3.230 - 3.246: 11.0558% ( 840) 00:15:47.795 3.246 - 3.261: 16.0834% ( 839) 00:15:47.795 3.261 - 3.276: 22.7768% ( 1117) 00:15:47.795 3.276 - 3.291: 28.7332% ( 994) 00:15:47.795 3.291 - 3.307: 34.2761% ( 925) 00:15:47.795 3.307 - 3.322: 40.4842% ( 1036) 00:15:47.795 3.322 - 3.337: 45.9432% ( 911) 00:15:47.795 3.337 - 3.352: 50.8509% ( 819) 00:15:47.795 3.352 - 3.368: 56.8313% ( 998) 00:15:47.795 3.368 - 3.383: 65.1306% ( 1385) 00:15:47.795 3.383 - 3.398: 69.9365% ( 802) 00:15:47.795 3.398 - 3.413: 75.6412% ( 952) 00:15:47.795 3.413 - 3.429: 80.3631% ( 788) 00:15:47.795 3.429 - 3.444: 83.3293% ( 495) 00:15:47.795 3.444 - 3.459: 85.5046% ( 363) 00:15:47.795 3.459 - 3.474: 86.8888% ( 231) 00:15:47.795 3.474 - 3.490: 87.6438% ( 126) 00:15:47.795 3.490 - 3.505: 88.1771% ( 89) 00:15:47.795 3.505 - 3.520: 88.8782% ( 117) 00:15:47.795 3.520 - 3.535: 89.5674% ( 115) 00:15:47.795 3.535 - 3.550: 90.3823% ( 136) 00:15:47.795 3.550 - 3.566: 91.3710% ( 165) 00:15:47.795 3.566 - 3.581: 92.2519% ( 147) 00:15:47.795 3.581 - 3.596: 93.0070% ( 126) 00:15:47.795 3.596 - 3.611: 93.8399% ( 139) 00:15:47.795 3.611 - 3.627: 94.7208% ( 147) 00:15:47.795 3.627 - 3.642: 95.6855% ( 161) 00:15:47.795 3.642 - 3.657: 96.5424% ( 143) 00:15:47.795 3.657 - 3.672: 97.2975% ( 126) 00:15:47.795 3.672 - 3.688: 97.8907% ( 99) 00:15:47.795 3.688 - 3.703: 98.3341% ( 74) 00:15:47.795 3.703 - 3.718: 98.6517% ( 53) 00:15:47.795 3.718 - 3.733: 98.9513% ( 50) 00:15:47.795 3.733 - 3.749: 99.1731% ( 37) 00:15:47.795 3.749 - 3.764: 99.3109% ( 23) 00:15:47.795 3.764 - 3.779: 99.4008% ( 15) 00:15:47.795 3.779 - 3.794: 99.4907% ( 15) 00:15:47.795 3.794 - 3.810: 99.5266% ( 6) 00:15:47.795 3.810 - 3.825: 99.5686% ( 7) 00:15:47.795 3.825 - 3.840: 99.5865% ( 3) 00:15:47.795 3.840 - 3.855: 99.5925% ( 1) 00:15:47.795 3.855 - 3.870: 99.6045% ( 2) 00:15:47.795 3.870 - 3.886: 99.6105% ( 1) 00:15:47.795 3.992 - 4.023: 99.6165% ( 1) 00:15:47.795 4.998 - 5.029: 99.6225% ( 1) 00:15:47.795 5.150 - 5.181: 99.6285% ( 1) 00:15:47.795 5.242 - 5.272: 99.6345% ( 1) 00:15:47.795 5.303 - 5.333: 99.6405% ( 1) 00:15:47.795 5.364 - 5.394: 99.6524% ( 2) 00:15:47.795 
5.394 - 5.425: 99.6584% ( 1) 00:15:47.795 5.516 - 5.547: 99.6644% ( 1) 00:15:47.795 5.547 - 5.577: 99.6764% ( 2) 00:15:47.795 5.669 - 5.699: 99.6824% ( 1) 00:15:47.795 5.851 - 5.882: 99.6884% ( 1) 00:15:47.795 5.882 - 5.912: 99.6944% ( 1) 00:15:47.795 6.004 - 6.034: 99.7004% ( 1) 00:15:47.795 6.187 - 6.217: 99.7064% ( 1) 00:15:47.795 6.248 - 6.278: 99.7124% ( 1) 00:15:47.795 6.339 - 6.370: 99.7244% ( 2) 00:15:47.795 6.370 - 6.400: 99.7303% ( 1) 00:15:47.795 6.400 - 6.430: 99.7363% ( 1) 00:15:47.795 6.461 - 6.491: 99.7423% ( 1) 00:15:47.795 6.674 - 6.705: 99.7483% ( 1) 00:15:47.795 6.735 - 6.766: 99.7603% ( 2) 00:15:47.795 6.918 - 6.949: 99.7663% ( 1) 00:15:47.795 6.979 - 7.010: 99.7723% ( 1) 00:15:47.795 7.192 - 7.223: 99.7783% ( 1) 00:15:47.795 7.223 - 7.253: 99.7843% ( 1) 00:15:47.795 7.284 - 7.314: 99.7903% ( 1) 00:15:47.795 7.467 - 7.497: 99.7963% ( 1) 00:15:47.795 7.528 - 7.558: 99.8023% ( 1) 00:15:47.795 7.924 - 7.985: 99.8082% ( 1) 00:15:47.795 7.985 - 8.046: 99.8142% ( 1) 00:15:47.795 8.107 - 8.168: 99.8202% ( 1) 00:15:47.795 8.168 - 8.229: 99.8322% ( 2) 00:15:47.795 8.229 - 8.290: 99.8382% ( 1) 00:15:47.795 8.350 - 8.411: 99.8442% ( 1) 00:15:47.795 8.594 - 8.655: 99.8502% ( 1) 00:15:47.795 9.021 - 9.082: 99.8562% ( 1) 00:15:47.795 9.326 - 9.387: 99.8622% ( 1) 00:15:47.795 9.630 - 9.691: 99.8682% ( 1) 00:15:47.795 10.118 - 10.179: 99.8742% ( 1) 00:15:47.795 13.044 - 13.105: 99.8802% ( 1) 00:15:47.795 13.531 - 13.592: 99.8861% ( 1) 00:15:47.795 15.299 - 15.360: 99.8921% ( 1) 00:15:47.795 19.261 - 19.383: 99.8981% ( 1) 00:15:47.795 3994.575 - 4025.783: 100.0000% ( 17) 00:15:47.795 00:15:47.795 Complete histogram 00:15:47.795 ================== 00:15:47.795 Range in us Cumulative Count 00:15:47.795 1.707 - 1.714: 0.0060% ( 1) 00:15:47.795 1.714 - 1.722: 0.0419% ( 6) 00:15:47.795 1.722 - 1.730: 0.1079% ( 11) 00:15:47.795 1.730 - 1.737: 0.1558% ( 8) 00:15:47.795 1.737 - 1.745: 0.1678% ( 2) 00:15:47.795 1.745 - 1.752: 0.1738% ( 1) 00:15:47.795 1.752 - 1.760: 0.4075% ( 39) 00:15:47.795 1.760 - 1.768: 4.7399% ( 723) 00:15:47.795 1.768 - 1.775: 25.0839% ( 3395) 00:15:47.795 1.775 - 1.783: 47.6690% ( 3769) 00:15:47.795 1.783 - 1.790: 55.8965% ( 1373) 00:15:47.795 1.790 - 1.798: 58.5570% ( 444) 00:15:47.795 1.798 - 1.806: 60.7083% ( 359) 00:15:47.795 1.806 - 1.813: 64.9808% ( 713) 00:15:47.795 1.813 - 1.821: 76.4501% ( 1914) 00:15:47.795 1.821 - 1.829: 88.2730% ( 1973) 00:15:47.795 1.829 - 1.836: 93.3725% ( 851) 00:15:47.795 1.836 - 1.844: 95.0623% ( 282) 00:15:47.796 1.844 - 1.851: 96.4226% ( 227) 00:15:47.796 1.851 - 1.859: 97.2615% ( 140) 00:15:47.796 1.859 - 1.867: 97.7589% ( 83) 00:15:47.796 1.867 - 1.874: 97.9446% ( 31) 00:15:47.796 1.874 - 1.882: 98.2023% ( 43) 00:15:47.796 1.882 - 1.890: 98.4540% ( 42) 00:15:47.796 1.890 - 1.897: 98.6877% ( 39) 00:15:47.796 1.897 - 1.905: 98.8914% ( 34) 00:15:47.796 1.905 - 1.912: 99.0352% ( 24) 00:15:47.796 1.912 - 1.920: 99.1191% ( 14) 00:15:47.796 1.920 - 1.928: 99.1431% ( 4) 00:15:47.796 1.928 - 1.935: 99.2090% ( 11) 00:15:47.796 1.935 - 1.943: 99.2270% ( 3) 00:15:47.796 1.943 - 1.950: 99.2330% ( 1) 00:15:47.796 1.950 - 1.966: 99.2629% ( 5) 00:15:47.796 1.966 - 1.981: 99.2749% ( 2) 00:15:47.796 1.981 - 1.996: 99.2929% ( 3) 00:15:47.796 2.042 - 2.057: 99.2989% ( 1) 00:15:47.796 2.072 - 2.088: 99.3049% ( 1) 00:15:47.796 2.103 - 2.118: 99.3109% ( 1) 00:15:47.796 2.133 - 2.149: 99.3169% ( 1) 00:15:47.796 2.149 - 2.164: 99.3229% ( 1) 00:15:47.796 2.194 - 2.210: 99.3468% ( 4) 00:15:47.796 2.210 - 2.225: 99.3528% ( 1) 00:15:47.796 2.225 - 2.240: 99.3588% 
( 1) 00:15:47.796 2.270 - 2.286: 99.3648% ( 1) 00:15:47.796 2.392 - 2.408: 99.3708% ( 1) 00:15:47.796 2.545 - 2.560: 99.3768% ( 1) 00:15:47.796 3.444 - 3.459: 99.3828% ( 1) 00:15:47.796 3.596 - 3.611: 99.3888% ( 1) 00:15:47.796 3.657 - 3.672: 99.3948% ( 1) 00:15:47.796 3.733 - 3.749: 99.4008% ( 1) 00:15:47.796 3.794 - 3.810: 99.4068% ( 1) 00:15:47.796 3.931 - 3.962: 99.4128% ( 1) 00:15:47.796 3.992 - 4.023: 99.4187% ( 1) 00:15:47.796 4.023 - 4.053: 99.4247% ( 1) 00:15:47.796 4.053 - 4.084: 99.4307% ( 1) 00:15:47.796 4.632 - 4.663: 99.4367% ( 1) 00:15:47.796 5.059 - 5.090: 99.4427% ( 1) 00:15:47.796 5.211 - 5.242: 99.4487% ( 1) 00:15:47.796 5.455 - 5.486: 99.4547% ( 1) 00:15:47.796 5.486 - 5.516: 99.4667% ( 2) 00:15:47.796 5.547 - 5.577: 99.4727% ( 1) 00:15:47.796 5.577 - 5.608: 99.4787% ( 1) 00:15:47.796 5.638 - 5.669: 99.4847% ( 1) 00:15:47.796 6.034 - 6.065: 99.4966% ( 2) 00:15:47.796 6.187 - 6.217: 99.5026% ( 1) 00:15:47.796 6.735 - 6.766: 99.5086% ( 1) 00:15:47.796 6.888 - 6.918: 99.5146% ( 1) 00:15:47.796 6.949 - 6.979: 99.5206% ( 1) 00:15:47.796 7.406 - 7.436: 99.5266% ( 1) 00:15:47.796
[2024-12-10 14:17:48.451231] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:47.796
7.558 - 7.589: 99.5326% ( 1) 00:15:47.796 7.619 - 7.650: 99.5386% ( 1) 00:15:47.796 9.996 - 10.057: 99.5446% ( 1) 00:15:47.796 13.714 - 13.775: 99.5506% ( 1) 00:15:47.796 17.554 - 17.676: 99.5566% ( 1) 00:15:47.796 3994.575 - 4025.783: 100.0000% ( 74) 00:15:47.796 00:15:47.796 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:47.796 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:47.796 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:47.796 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:47.796 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:48.055 [ 00:15:48.055 { 00:15:48.055 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:48.055 "subtype": "Discovery", 00:15:48.055 "listen_addresses": [], 00:15:48.055 "allow_any_host": true, 00:15:48.055 "hosts": [] 00:15:48.055 }, 00:15:48.055 { 00:15:48.055 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:48.055 "subtype": "NVMe", 00:15:48.055 "listen_addresses": [ 00:15:48.055 { 00:15:48.055 "trtype": "VFIOUSER", 00:15:48.055 "adrfam": "IPv4", 00:15:48.055 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:48.055 "trsvcid": "0" 00:15:48.055 } 00:15:48.055 ], 00:15:48.055 "allow_any_host": true, 00:15:48.055 "hosts": [], 00:15:48.055 "serial_number": "SPDK1", 00:15:48.055 "model_number": "SPDK bdev Controller", 00:15:48.055 "max_namespaces": 32, 00:15:48.055 "min_cntlid": 1, 00:15:48.055 "max_cntlid": 65519, 00:15:48.055 "namespaces": [ 00:15:48.055 { 00:15:48.055 "nsid": 1, 00:15:48.055 "bdev_name": "Malloc1", 00:15:48.055 "name": "Malloc1", 00:15:48.055 "nguid": "4A2048B6AD324DC1A0F5BE5EF421254D", 00:15:48.055 "uuid": "4a2048b6-ad32-4dc1-a0f5-be5ef421254d" 00:15:48.055 }, 00:15:48.055 { 00:15:48.055 "nsid": 2, 00:15:48.055 "bdev_name": "Malloc3", 00:15:48.055 "name": "Malloc3", 00:15:48.055 "nguid":
"01340A3081D24C4AAD3EC53BF506C9DD", 00:15:48.055 "uuid": "01340a30-81d2-4c4a-ad3e-c53bf506c9dd" 00:15:48.055 } 00:15:48.055 ] 00:15:48.055 }, 00:15:48.055 { 00:15:48.055 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:48.055 "subtype": "NVMe", 00:15:48.055 "listen_addresses": [ 00:15:48.055 { 00:15:48.055 "trtype": "VFIOUSER", 00:15:48.055 "adrfam": "IPv4", 00:15:48.055 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:48.055 "trsvcid": "0" 00:15:48.055 } 00:15:48.055 ], 00:15:48.055 "allow_any_host": true, 00:15:48.055 "hosts": [], 00:15:48.055 "serial_number": "SPDK2", 00:15:48.055 "model_number": "SPDK bdev Controller", 00:15:48.055 "max_namespaces": 32, 00:15:48.055 "min_cntlid": 1, 00:15:48.055 "max_cntlid": 65519, 00:15:48.055 "namespaces": [ 00:15:48.055 { 00:15:48.055 "nsid": 1, 00:15:48.055 "bdev_name": "Malloc2", 00:15:48.055 "name": "Malloc2", 00:15:48.055 "nguid": "3C0BFA6668DC40FA888E6AC7067FAA2F", 00:15:48.055 "uuid": "3c0bfa66-68dc-40fa-888e-6ac7067faa2f" 00:15:48.055 } 00:15:48.055 ] 00:15:48.055 } 00:15:48.055 ] 00:15:48.055 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:48.055 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1612854 00:15:48.055 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:48.055 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:48.055 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:48.055 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:48.055 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:48.055 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:48.055 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:48.055 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:48.314 [2024-12-10 14:17:48.869611] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:48.314 Malloc4 00:15:48.314 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:48.573 [2024-12-10 14:17:49.098269] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:48.573 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:48.573 Asynchronous Event Request test 00:15:48.573 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:48.573 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:48.573 Registering asynchronous event callbacks... 
00:15:48.573 Starting namespace attribute notice tests for all controllers... 00:15:48.573 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:48.573 aer_cb - Changed Namespace 00:15:48.573 Cleaning up... 00:15:48.573 [ 00:15:48.573 { 00:15:48.573 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:48.573 "subtype": "Discovery", 00:15:48.573 "listen_addresses": [], 00:15:48.573 "allow_any_host": true, 00:15:48.573 "hosts": [] 00:15:48.573 }, 00:15:48.573 { 00:15:48.573 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:48.573 "subtype": "NVMe", 00:15:48.573 "listen_addresses": [ 00:15:48.573 { 00:15:48.573 "trtype": "VFIOUSER", 00:15:48.573 "adrfam": "IPv4", 00:15:48.573 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:48.573 "trsvcid": "0" 00:15:48.573 } 00:15:48.573 ], 00:15:48.573 "allow_any_host": true, 00:15:48.573 "hosts": [], 00:15:48.573 "serial_number": "SPDK1", 00:15:48.573 "model_number": "SPDK bdev Controller", 00:15:48.573 "max_namespaces": 32, 00:15:48.573 "min_cntlid": 1, 00:15:48.573 "max_cntlid": 65519, 00:15:48.573 "namespaces": [ 00:15:48.573 { 00:15:48.573 "nsid": 1, 00:15:48.573 "bdev_name": "Malloc1", 00:15:48.573 "name": "Malloc1", 00:15:48.573 "nguid": "4A2048B6AD324DC1A0F5BE5EF421254D", 00:15:48.573 "uuid": "4a2048b6-ad32-4dc1-a0f5-be5ef421254d" 00:15:48.573 }, 00:15:48.573 { 00:15:48.573 "nsid": 2, 00:15:48.573 "bdev_name": "Malloc3", 00:15:48.573 "name": "Malloc3", 00:15:48.573 "nguid": "01340A3081D24C4AAD3EC53BF506C9DD", 00:15:48.573 "uuid": "01340a30-81d2-4c4a-ad3e-c53bf506c9dd" 00:15:48.573 } 00:15:48.573 ] 00:15:48.573 }, 00:15:48.573 { 00:15:48.573 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:48.573 "subtype": "NVMe", 00:15:48.573 "listen_addresses": [ 00:15:48.573 { 00:15:48.573 "trtype": "VFIOUSER", 00:15:48.573 "adrfam": "IPv4", 00:15:48.573 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:48.573 "trsvcid": "0" 00:15:48.573 } 00:15:48.573 ], 00:15:48.573 "allow_any_host": true, 00:15:48.573 "hosts": [], 00:15:48.573 "serial_number": "SPDK2", 00:15:48.573 "model_number": "SPDK bdev Controller", 00:15:48.573 "max_namespaces": 32, 00:15:48.573 "min_cntlid": 1, 00:15:48.573 "max_cntlid": 65519, 00:15:48.573 "namespaces": [ 00:15:48.573 { 00:15:48.573 "nsid": 1, 00:15:48.573 "bdev_name": "Malloc2", 00:15:48.573 "name": "Malloc2", 00:15:48.573 "nguid": "3C0BFA6668DC40FA888E6AC7067FAA2F", 00:15:48.573 "uuid": "3c0bfa66-68dc-40fa-888e-6ac7067faa2f" 00:15:48.573 }, 00:15:48.573 { 00:15:48.573 "nsid": 2, 00:15:48.573 "bdev_name": "Malloc4", 00:15:48.573 "name": "Malloc4", 00:15:48.573 "nguid": "61B91F5EDBB5470ABB202F4EA8866AC0", 00:15:48.573 "uuid": "61b91f5e-dbb5-470a-bb20-2f4ea8866ac0" 00:15:48.573 } 00:15:48.573 ] 00:15:48.573 } 00:15:48.573 ] 00:15:48.573 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1612854 00:15:48.573 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:48.573 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1605257 00:15:48.573 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1605257 ']' 00:15:48.573 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1605257 00:15:48.832 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:48.833 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:48.833 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1605257 00:15:48.833 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:48.833 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:48.833 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1605257' 00:15:48.833 killing process with pid 1605257 00:15:48.833 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1605257 00:15:48.833 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1605257 00:15:49.092 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:49.092 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:49.092 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:49.092 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:49.092 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:49.092 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:49.092 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1613007 00:15:49.092 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1613007' 00:15:49.092 Process pid: 1613007 00:15:49.092 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:49.092 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1613007 00:15:49.092 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1613007 ']' 00:15:49.092 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.092 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.092 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.092 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.092 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:49.092 [2024-12-10 14:17:49.648206] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:49.092 [2024-12-10 14:17:49.649070] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:15:49.092 [2024-12-10 14:17:49.649106] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.092 [2024-12-10 14:17:49.729448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:49.092 [2024-12-10 14:17:49.765085] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.092 [2024-12-10 14:17:49.765122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.092 [2024-12-10 14:17:49.765130] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.092 [2024-12-10 14:17:49.765135] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.092 [2024-12-10 14:17:49.765140] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.092 [2024-12-10 14:17:49.766553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.092 [2024-12-10 14:17:49.766664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:49.092 [2024-12-10 14:17:49.766770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.092 [2024-12-10 14:17:49.766771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:49.352 [2024-12-10 14:17:49.835364] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:49.352 [2024-12-10 14:17:49.835779] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:49.352 [2024-12-10 14:17:49.836202] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:49.352 [2024-12-10 14:17:49.836374] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:49.352 [2024-12-10 14:17:49.836436] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:15:49.352 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.352 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:49.352 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:50.288 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:50.547 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:50.547 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:50.547 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:50.547 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:50.547 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:50.547 Malloc1 00:15:50.806 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:50.806 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:51.065 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:51.323 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:51.323 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:51.323 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:51.582 Malloc2 00:15:51.582 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:51.582 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:51.840 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:52.099 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:52.099 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1613007 00:15:52.099 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 1613007 ']' 00:15:52.099 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1613007 00:15:52.099 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:52.099 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.099 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1613007 00:15:52.099 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.099 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.099 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1613007' 00:15:52.099 killing process with pid 1613007 00:15:52.099 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1613007 00:15:52.099 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1613007 00:15:52.358 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:52.358 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:52.358 00:15:52.358 real 0m50.749s 00:15:52.358 user 3m15.889s 00:15:52.358 sys 0m3.540s 00:15:52.358 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:52.358 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:52.358 ************************************ 00:15:52.358 END TEST nvmf_vfio_user 00:15:52.358 ************************************ 00:15:52.358 14:17:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:52.358 14:17:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:52.358 14:17:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.358 14:17:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:52.358 ************************************ 00:15:52.358 START TEST nvmf_vfio_user_nvme_compliance 00:15:52.358 ************************************ 00:15:52.358 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:52.358 * Looking for test storage... 
00:15:52.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:52.358 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:52.358 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:15:52.358 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:52.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.618 --rc genhtml_branch_coverage=1 00:15:52.618 --rc genhtml_function_coverage=1 00:15:52.618 --rc genhtml_legend=1 00:15:52.618 --rc geninfo_all_blocks=1 00:15:52.618 --rc geninfo_unexecuted_blocks=1 00:15:52.618 00:15:52.618 ' 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:52.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.618 --rc genhtml_branch_coverage=1 00:15:52.618 --rc genhtml_function_coverage=1 00:15:52.618 --rc genhtml_legend=1 00:15:52.618 --rc geninfo_all_blocks=1 00:15:52.618 --rc geninfo_unexecuted_blocks=1 00:15:52.618 00:15:52.618 ' 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:52.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.618 --rc genhtml_branch_coverage=1 00:15:52.618 --rc genhtml_function_coverage=1 00:15:52.618 --rc genhtml_legend=1 00:15:52.618 --rc geninfo_all_blocks=1 00:15:52.618 --rc geninfo_unexecuted_blocks=1 00:15:52.618 00:15:52.618 ' 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:52.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.618 --rc genhtml_branch_coverage=1 00:15:52.618 --rc genhtml_function_coverage=1 00:15:52.618 --rc genhtml_legend=1 00:15:52.618 --rc geninfo_all_blocks=1 00:15:52.618 --rc 
geninfo_unexecuted_blocks=1 00:15:52.618 00:15:52.618 ' 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:52.618 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:52.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1613753 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1613753' 00:15:52.619 Process pid: 1613753 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1613753 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1613753 ']' 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.619 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:52.619 [2024-12-10 14:17:53.259907] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:15:52.619 [2024-12-10 14:17:53.259954] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.619 [2024-12-10 14:17:53.338558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:52.878 [2024-12-10 14:17:53.377588] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.878 [2024-12-10 14:17:53.377623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.878 [2024-12-10 14:17:53.377630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.878 [2024-12-10 14:17:53.377636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.878 [2024-12-10 14:17:53.377640] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.878 [2024-12-10 14:17:53.378940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.878 [2024-12-10 14:17:53.379049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.878 [2024-12-10 14:17:53.379050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.878 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:52.878 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:52.878 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:53.814 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:53.814 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:53.814 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:53.814 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.814 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:53.814 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.814 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:53.814 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:53.814 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.814 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:53.814 malloc0 00:15:53.814 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.814 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:53.814 14:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.814 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:53.814 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.815 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:53.815 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.815 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:54.073 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.073 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:54.073 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.073 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:54.073 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.073 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:54.073 00:15:54.073 00:15:54.073 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.073 http://cunit.sourceforge.net/ 00:15:54.073 00:15:54.073 00:15:54.073 Suite: nvme_compliance 00:15:54.073 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-10 14:17:54.736753] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.073 [2024-12-10 14:17:54.738095] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:54.073 [2024-12-10 14:17:54.738110] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:54.073 [2024-12-10 14:17:54.738116] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:54.073 [2024-12-10 14:17:54.739774] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.073 passed 00:15:54.332 Test: admin_identify_ctrlr_verify_fused ...[2024-12-10 14:17:54.818272] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.332 [2024-12-10 14:17:54.821298] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.332 passed 00:15:54.332 Test: admin_identify_ns ...[2024-12-10 14:17:54.897494] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.332 [2024-12-10 14:17:54.957225] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:54.332 [2024-12-10 14:17:54.965227] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:54.332 [2024-12-10 14:17:54.986322] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:54.332 passed 00:15:54.332 Test: admin_get_features_mandatory_features ...[2024-12-10 14:17:55.061881] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.332 [2024-12-10 14:17:55.064896] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.589 passed 00:15:54.590 Test: admin_get_features_optional_features ...[2024-12-10 14:17:55.142390] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.590 [2024-12-10 14:17:55.145411] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.590 passed 00:15:54.590 Test: admin_set_features_number_of_queues ...[2024-12-10 14:17:55.221405] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.848 [2024-12-10 14:17:55.338307] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.848 passed 00:15:54.848 Test: admin_get_log_page_mandatory_logs ...[2024-12-10 14:17:55.414173] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.848 [2024-12-10 14:17:55.417200] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.848 passed 00:15:54.848 Test: admin_get_log_page_with_lpo ...[2024-12-10 14:17:55.489401] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.848 [2024-12-10 14:17:55.561230] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:54.848 [2024-12-10 14:17:55.571274] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.106 passed 00:15:55.106 Test: fabric_property_get ...[2024-12-10 14:17:55.646908] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.107 [2024-12-10 14:17:55.648141] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:55.107 [2024-12-10 14:17:55.649923] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.107 passed 00:15:55.107 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-10 14:17:55.727417] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.107 [2024-12-10 14:17:55.728649] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:55.107 [2024-12-10 14:17:55.730437] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.107 passed 00:15:55.107 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-10 14:17:55.805404] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.365 [2024-12-10 14:17:55.893226] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:55.365 [2024-12-10 14:17:55.909225] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:55.365 [2024-12-10 14:17:55.914312] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.365 passed 00:15:55.365 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-10 14:17:55.988057] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.365 [2024-12-10 14:17:55.989289] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:55.365 [2024-12-10 14:17:55.991072] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.365 passed 00:15:55.365 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-10 14:17:56.067668] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.624 [2024-12-10 14:17:56.143223] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:55.624 [2024-12-10 14:17:56.167223] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:55.624 [2024-12-10 14:17:56.172303] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.624 passed 00:15:55.624 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-10 14:17:56.248687] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.624 [2024-12-10 14:17:56.249913] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:55.624 [2024-12-10 14:17:56.249936] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:55.624 [2024-12-10 14:17:56.251711] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.624 passed 00:15:55.624 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-10 14:17:56.326495] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.883 [2024-12-10 14:17:56.422225] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:55.883 [2024-12-10 14:17:56.430222] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:55.883 [2024-12-10 14:17:56.438227] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:55.883 [2024-12-10 14:17:56.444234] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:55.883 [2024-12-10 14:17:56.475314] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.883 passed 00:15:55.883 Test: admin_create_io_sq_verify_pc ...[2024-12-10 14:17:56.547818] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.883 [2024-12-10 14:17:56.563230] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:55.883 [2024-12-10 14:17:56.580981] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.883 passed 00:15:56.141 Test: admin_create_io_qp_max_qps ...[2024-12-10 14:17:56.657485] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.077 [2024-12-10 14:17:57.768225] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:57.644 [2024-12-10 14:17:58.150036] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.644 passed 00:15:57.644 Test: admin_create_io_sq_shared_cq ...[2024-12-10 14:17:58.224825] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.644 [2024-12-10 14:17:58.356230] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:57.903 [2024-12-10 14:17:58.393284] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.903 passed 00:15:57.903 00:15:57.903 Run Summary: Type Total Ran Passed Failed Inactive 00:15:57.903 suites 1 1 n/a 0 0 00:15:57.903 tests 18 18 18 0 0 00:15:57.903 asserts 
360 360 360 0 n/a 00:15:57.903 00:15:57.903 Elapsed time = 1.503 seconds 00:15:57.903 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1613753 00:15:57.903 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1613753 ']' 00:15:57.903 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1613753 00:15:57.903 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:57.903 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:57.903 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1613753 00:15:57.903 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:57.903 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:57.903 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1613753' 00:15:57.903 killing process with pid 1613753 00:15:57.903 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1613753 00:15:57.903 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1613753 00:15:58.161 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:58.161 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:58.161 00:15:58.161 real 0m5.662s 00:15:58.161 user 0m15.857s 00:15:58.161 sys 0m0.523s 00:15:58.161 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.161 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:58.161 ************************************ 00:15:58.161 END TEST nvmf_vfio_user_nvme_compliance 00:15:58.161 ************************************ 00:15:58.161 14:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:58.161 14:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:58.161 14:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.161 14:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:58.161 ************************************ 00:15:58.161 START TEST nvmf_vfio_user_fuzz 00:15:58.161 ************************************ 00:15:58.161 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:58.161 * Looking for test storage... 
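The killprocess 1613753 teardown recorded just above (before the fuzz test banner) follows a fixed pattern: confirm the pid is still alive with kill -0, check the command name via ps so a bare sudo wrapper is never signalled directly, then kill and reap. A condensed sketch of that shape, not the verbatim SPDK helper:
  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1        # still running?
    local name
    name=$(ps --no-headers -o comm= "$pid")       # what are we about to kill?
    [ "$name" = sudo ] && return 1                # never signal a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid" 2>/dev/null        # wait only reaps children of this shell
  }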
00:15:58.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:58.161 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:58.161 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:58.162 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:58.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.421 --rc genhtml_branch_coverage=1 00:15:58.421 --rc genhtml_function_coverage=1 00:15:58.421 --rc genhtml_legend=1 00:15:58.421 --rc geninfo_all_blocks=1 00:15:58.421 --rc geninfo_unexecuted_blocks=1 00:15:58.421 00:15:58.421 ' 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:58.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.421 --rc genhtml_branch_coverage=1 00:15:58.421 --rc genhtml_function_coverage=1 00:15:58.421 --rc genhtml_legend=1 00:15:58.421 --rc geninfo_all_blocks=1 00:15:58.421 --rc geninfo_unexecuted_blocks=1 00:15:58.421 00:15:58.421 ' 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.421 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:15:58.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1614730 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1614730' 00:15:58.422 Process pid: 1614730 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1614730 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1614730 ']' 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
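The start-up handshake logged here (launch nvmf_tgt, then waitforlisten blocks until /var/tmp/spdk.sock answers) amounts to polling the RPC socket. A rough manual equivalent, assuming an SPDK checkout and build under ./spdk and the default RPC socket path:
  # Launch the target: shm id 0, all tracepoint groups enabled, core mask 0x1.
  ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  tgt_pid=$!
  # Poll the RPC socket until the target services requests (stand-in for waitforlisten).
  until ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$tgt_pid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
  done
  echo "nvmf_tgt ($tgt_pid) is listening on /var/tmp/spdk.sock"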
00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.422 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:58.680 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.680 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:58.680 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:59.616 malloc0 00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
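The rpc_cmd sequence above maps onto stock scripts/rpc.py calls; a sketch of the same vfio-user provisioning done by hand, with the NQN, socket path and bdev geometry taken from this run (the ./spdk prefix is an assumed checkout location):
  # Enable the vfio-user transport and give the listener a socket directory.
  ./spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  # Back the namespace with a 64 MiB malloc bdev using 512-byte blocks.
  ./spdk/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  # Create the subsystem (any host allowed, serial "spdk"), attach the bdev, listen.
  ./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  ./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  ./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0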
00:15:59.616 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:31.811 Fuzzing completed. Shutting down the fuzz application 00:16:31.811 00:16:31.811 Dumping successful admin opcodes: 00:16:31.811 9, 10, 00:16:31.811 Dumping successful io opcodes: 00:16:31.811 0, 00:16:31.811 NS: 0x20000081ef00 I/O qp, Total commands completed: 1002746, total successful commands: 3932, random_seed: 1761524928 00:16:31.811 NS: 0x20000081ef00 admin qp, Total commands completed: 248144, total successful commands: 58, random_seed: 1400126720 00:16:31.811 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:31.811 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.811 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:31.811 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.811 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1614730 00:16:31.811 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1614730 ']' 00:16:31.811 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1614730 00:16:31.811 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:31.811 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:31.811 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1614730 00:16:31.811 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:31.811 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:31.811 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1614730' 00:16:31.811 killing process with pid 1614730 00:16:31.811 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1614730 00:16:31.811 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1614730 00:16:31.811 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:31.811 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:31.811 00:16:31.811 real 0m32.232s 00:16:31.811 user 0m29.549s 00:16:31.811 sys 0m31.760s 00:16:31.811 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.811 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:31.811 ************************************ 
00:16:31.811 END TEST nvmf_vfio_user_fuzz 00:16:31.811 ************************************ 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:31.811 ************************************ 00:16:31.811 START TEST nvmf_auth_target 00:16:31.811 ************************************ 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:31.811 * Looking for test storage... 00:16:31.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:31.811 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:31.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.812 --rc genhtml_branch_coverage=1 00:16:31.812 --rc genhtml_function_coverage=1 00:16:31.812 --rc genhtml_legend=1 00:16:31.812 --rc geninfo_all_blocks=1 00:16:31.812 --rc geninfo_unexecuted_blocks=1 00:16:31.812 00:16:31.812 ' 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:31.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.812 --rc genhtml_branch_coverage=1 00:16:31.812 --rc genhtml_function_coverage=1 00:16:31.812 --rc genhtml_legend=1 00:16:31.812 --rc geninfo_all_blocks=1 00:16:31.812 --rc geninfo_unexecuted_blocks=1 00:16:31.812 00:16:31.812 ' 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.812 14:18:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:31.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:31.813 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:31.813 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:31.812 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:37.084 
14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:37.084 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:37.084 14:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:37.084 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:37.084 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:37.085 Found net devices under 0000:af:00.0: cvl_0_0 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:37.085 Found net devices under 0000:af:00.1: cvl_0_1 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:37.085 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:37.345 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:37.345 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:37.345 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:37.345 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:37.345 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:37.345 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:37.345 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:37.345 14:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:37.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:37.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:16:37.345 00:16:37.345 --- 10.0.0.2 ping statistics --- 00:16:37.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.345 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:16:37.345 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:37.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:37.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:16:37.345 00:16:37.345 --- 10.0.0.1 ping statistics --- 00:16:37.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.345 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:16:37.345 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:37.345 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:37.345 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:37.345 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:37.345 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:37.345 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:37.345 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:37.345 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:37.345 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:37.345 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:37.345 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:37.345 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:37.345 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.345 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1623951 00:16:37.345 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1623951 00:16:37.345 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1623951 ']' 00:16:37.345 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:37.345 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.345 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:37.345 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
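Up to this point nvmftestinit has carved the two E810 ports into a point-to-point test topology: one port is moved into a private network namespace and becomes the target side, while the other stays in the root namespace as the initiator. Condensed to its effect, the sequence is equivalent to the sketch below (interface names, addresses, and the iptables rule are taken from the trace; the real ipts helper additionally tags the rule with an SPDK_NVMF comment):

    # Target port lives in its own namespace; initiator port stays in the default one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Let NVMe/TCP (port 4420) in, then verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1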
00:16:37.345 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.345 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1624190 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3ec1521ca310306fe5417523b976bf7dd971de3a730d2f4c 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.qi2 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3ec1521ca310306fe5417523b976bf7dd971de3a730d2f4c 0 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3ec1521ca310306fe5417523b976bf7dd971de3a730d2f4c 0 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3ec1521ca310306fe5417523b976bf7dd971de3a730d2f4c 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:38.282 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
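Two SPDK applications take part in the auth test: nvmf_tgt inside the namespace acts as the target (debug flag -L nvmf_auth, pid 1623951 above), while a second spdk_tgt on core mask 0x2 plays the host role with its own RPC socket at /var/tmp/host.sock (-L nvme_auth, pid 1624190). A stripped-down sketch of that launch sequence, assuming the $rootdir path used by this job; the polling loops stand in for the scripts' waitforlisten helper:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # tree used by this job

    # Target app inside the namespace; host-side app with its own RPC socket.
    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!
    "$rootdir/build/bin/spdk_tgt" -m 2 -r /var/tmp/host.sock -L nvme_auth &
    hostpid=$!

    # Poll each RPC socket until it answers (waitforlisten in the real scripts).
    until "$rootdir/scripts/rpc.py" rpc_get_methods &> /dev/null; do sleep 0.5; done
    until "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done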
00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.qi2 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.qi2 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.qi2 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0a88714ad415eb719faacbfa380ea7bab2656f1f8bc6710345f724db0fb73e74 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.8ha 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0a88714ad415eb719faacbfa380ea7bab2656f1f8bc6710345f724db0fb73e74 3 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0a88714ad415eb719faacbfa380ea7bab2656f1f8bc6710345f724db0fb73e74 3 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0a88714ad415eb719faacbfa380ea7bab2656f1f8bc6710345f724db0fb73e74 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.8ha 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.8ha 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.8ha 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
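gen_dhchap_key, traced above for "null 48" and again below for the other digests, produces each secret in two steps: N/2 random bytes are read from /dev/urandom as an N-character hex string, and that string is wrapped into the DHHC-1 secret representation used by DH-HMAC-CHAP (a two-digit hash id, 00..03 for unhashed/SHA-256/SHA-384/SHA-512, followed by base64 over the secret text plus a little-endian CRC-32, the same convention as nvme-cli's gen-dhchap-key). A standalone restatement of the first call; note that the hex string itself, not its decoded bytes, is the secret text, which is why the base64 blobs in the later nvme connect calls decode back to these hex strings:

    # gen_dhchap_key null 48, restated: 24 random bytes -> 48 hex chars -> DHHC-1 file.
    key=$(xxd -p -c0 -l 24 /dev/urandom)
    file=$(mktemp -t spdk.key-null.XXX)

    # Secret representation: DHHC-1:<hash id>:<base64(secret text || CRC-32, LE)>:
    python3 -c '
    import base64, sys, zlib
    secret = sys.argv[1].encode()
    crc = zlib.crc32(secret).to_bytes(4, "little")
    print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
    ' "$key" 0 > "$file"

    chmod 0600 "$file"
    echo "$file"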
00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f91df4b70ff0d779ef7fde7753894c18 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.fwp 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f91df4b70ff0d779ef7fde7753894c18 1 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f91df4b70ff0d779ef7fde7753894c18 1 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f91df4b70ff0d779ef7fde7753894c18 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.fwp 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.fwp 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.fwp 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4d603e363bf6095b2aed6ba0208386ac3cafdbfa89d9d25f 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.3Sv 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4d603e363bf6095b2aed6ba0208386ac3cafdbfa89d9d25f 2 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4d603e363bf6095b2aed6ba0208386ac3cafdbfa89d9d25f 2 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:38.542 14:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4d603e363bf6095b2aed6ba0208386ac3cafdbfa89d9d25f 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.3Sv 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.3Sv 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.3Sv 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a42d032aeb0f2ad5dd5871c1e5d54fe209f8cb91b4581282 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.qn9 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a42d032aeb0f2ad5dd5871c1e5d54fe209f8cb91b4581282 2 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a42d032aeb0f2ad5dd5871c1e5d54fe209f8cb91b4581282 2 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a42d032aeb0f2ad5dd5871c1e5d54fe209f8cb91b4581282 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.qn9 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.qn9 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.qn9 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
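The generator runs six more times in this stretch: keys[0..3] are the host-side keys and ckeys[0..2] their controller-side counterparts for bidirectional authentication (ckeys[3] is left empty further down, so key3 is tested one-way). To sanity-check any produced file, the DHHC-1 wrapper can be unpacked in reverse; a sketch, using the file name mktemp happened to choose for key0 in this run:

    # Strip the DHHC-1 wrapper, re-verify the CRC-32, and recover the secret text.
    python3 -c '
    import base64, sys, zlib
    blob = sys.argv[1].split(":")[2]
    raw = base64.b64decode(blob)
    secret, crc = raw[:-4], raw[-4:]
    assert zlib.crc32(secret).to_bytes(4, "little") == crc
    print(secret.decode())
    ' "$(cat /tmp/spdk.key-null.qi2)"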
00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:38.542 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=882ec0a3f0fdf84d591ac671ca680d12 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.uit 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 882ec0a3f0fdf84d591ac671ca680d12 1 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 882ec0a3f0fdf84d591ac671ca680d12 1 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=882ec0a3f0fdf84d591ac671ca680d12 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.uit 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.uit 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.uit 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=dbba14dbbe9a8b49f1c55106157acff12b9ce76b823b96066669b8808db60346 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.5gk 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key dbba14dbbe9a8b49f1c55106157acff12b9ce76b823b96066669b8808db60346 3 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 dbba14dbbe9a8b49f1c55106157acff12b9ce76b823b96066669b8808db60346 3 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=dbba14dbbe9a8b49f1c55106157acff12b9ce76b823b96066669b8808db60346 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.5gk 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.5gk 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.5gk 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1623951 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1623951 ']' 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:38.802 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.061 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:39.061 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:39.061 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1624190 /var/tmp/host.sock 00:16:39.061 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1624190 ']' 00:16:39.061 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:39.061 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:39.061 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:39.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
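With all key files in place, the trace that follows registers each of them twice, once per RPC server, so both the target and the host app can refer to them by keyring name (key0..key3, ckey0..ckey2) instead of by path. Condensed, that registration loop amounts to the sketch below, with $rpc as in the earlier launch sketch:

    rpc=$rootdir/scripts/rpc.py

    for i in 0 1 2 3; do
        # Target-side keyring (default /var/tmp/spdk.sock).
        "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"
        # Host-side keyring (-s /var/tmp/host.sock).
        "$rpc" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"
        # Controller keys only where one was generated (ckeys[3] is empty).
        if [ -n "${ckeys[$i]}" ]; then
            "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
            "$rpc" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        fi
    done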
00:16:39.061 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:39.061 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.061 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:39.061 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:39.061 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:39.061 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.061 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.320 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.320 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:39.320 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.qi2 00:16:39.320 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.320 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.320 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.320 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.qi2 00:16:39.320 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.qi2 00:16:39.320 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.8ha ]] 00:16:39.320 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8ha 00:16:39.320 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.320 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.320 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.320 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8ha 00:16:39.320 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8ha 00:16:39.578 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:39.578 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.fwp 00:16:39.578 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.578 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.578 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.578 14:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.fwp 00:16:39.578 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.fwp 00:16:39.837 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.3Sv ]] 00:16:39.837 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3Sv 00:16:39.837 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.837 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.837 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.837 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3Sv 00:16:39.837 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3Sv 00:16:40.096 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:40.096 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.qn9 00:16:40.096 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.096 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.096 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.096 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.qn9 00:16:40.096 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.qn9 00:16:40.355 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.uit ]] 00:16:40.355 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uit 00:16:40.355 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.355 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.355 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.355 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uit 00:16:40.355 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uit 00:16:40.355 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:40.355 14:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.5gk 00:16:40.355 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.355 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.355 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.355 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.5gk 00:16:40.355 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.5gk 00:16:40.614 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:40.614 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:40.614 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.614 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.614 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:40.614 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:40.873 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:40.873 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.873 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.873 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:40.873 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:40.873 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.873 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.873 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.873 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.873 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.873 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.873 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.873 
14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.132 00:16:41.132 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.132 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.132 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.391 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.391 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.391 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.391 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.391 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.391 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.391 { 00:16:41.391 "cntlid": 1, 00:16:41.391 "qid": 0, 00:16:41.391 "state": "enabled", 00:16:41.391 "thread": "nvmf_tgt_poll_group_000", 00:16:41.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:41.391 "listen_address": { 00:16:41.391 "trtype": "TCP", 00:16:41.391 "adrfam": "IPv4", 00:16:41.391 "traddr": "10.0.0.2", 00:16:41.391 "trsvcid": "4420" 00:16:41.391 }, 00:16:41.391 "peer_address": { 00:16:41.391 "trtype": "TCP", 00:16:41.391 "adrfam": "IPv4", 00:16:41.391 "traddr": "10.0.0.1", 00:16:41.391 "trsvcid": "48236" 00:16:41.391 }, 00:16:41.391 "auth": { 00:16:41.391 "state": "completed", 00:16:41.391 "digest": "sha256", 00:16:41.391 "dhgroup": "null" 00:16:41.391 } 00:16:41.391 } 00:16:41.391 ]' 00:16:41.391 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.391 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.391 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.391 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:41.391 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.391 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.391 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.391 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.650 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:16:41.650 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:16:42.218 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.218 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:42.218 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.218 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.218 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.218 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.218 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:42.218 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:42.477 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:42.477 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.477 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.477 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:42.477 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:42.477 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.477 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.477 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.477 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.477 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.477 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.477 14:18:43 
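This is the core of connect_authenticate, repeated for every (digest, dhgroup, keyid) combination: restrict the host app to a single digest/dhgroup pair, allow-list the host NQN on the subsystem with its DH-HMAC-CHAP key pair, attach a controller, and confirm from nvmf_subsystem_get_qpairs that the qpair's auth block reports state "completed" with the expected parameters. One iteration, condensed from the trace; the rpc/hostrpc definitions are a sketch standing in for the scripts' helpers:

    rpc=$rootdir/scripts/rpc.py                       # target RPC (default socket)
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }  # host-app RPC

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562

    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # A successful handshake shows up in the qpair's auth block.
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # "completed"
    hostrpc bdev_nvme_detach_controller nvme0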
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.477 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.736 00:16:42.736 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.736 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.736 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.995 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.995 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.995 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.995 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.995 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.995 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.995 { 00:16:42.995 "cntlid": 3, 00:16:42.995 "qid": 0, 00:16:42.995 "state": "enabled", 00:16:42.995 "thread": "nvmf_tgt_poll_group_000", 00:16:42.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:42.995 "listen_address": { 00:16:42.995 "trtype": "TCP", 00:16:42.995 "adrfam": "IPv4", 00:16:42.995 "traddr": "10.0.0.2", 00:16:42.995 "trsvcid": "4420" 00:16:42.995 }, 00:16:42.995 "peer_address": { 00:16:42.995 "trtype": "TCP", 00:16:42.995 "adrfam": "IPv4", 00:16:42.995 "traddr": "10.0.0.1", 00:16:42.995 "trsvcid": "48258" 00:16:42.995 }, 00:16:42.995 "auth": { 00:16:42.995 "state": "completed", 00:16:42.995 "digest": "sha256", 00:16:42.995 "dhgroup": "null" 00:16:42.995 } 00:16:42.995 } 00:16:42.995 ]' 00:16:42.995 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.995 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.995 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.995 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:42.995 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.995 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.995 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.995 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.254 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:16:43.254 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:16:43.822 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.822 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:43.822 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.822 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.822 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.822 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.822 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:43.822 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:44.081 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:44.081 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.081 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:44.081 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:44.081 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:44.081 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.081 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.081 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.081 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.081 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.081 14:18:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.081 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.081 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.340 00:16:44.341 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.341 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.341 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.599 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.599 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.599 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.599 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.599 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.599 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.599 { 00:16:44.599 "cntlid": 5, 00:16:44.599 "qid": 0, 00:16:44.599 "state": "enabled", 00:16:44.599 "thread": "nvmf_tgt_poll_group_000", 00:16:44.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:44.599 "listen_address": { 00:16:44.599 "trtype": "TCP", 00:16:44.599 "adrfam": "IPv4", 00:16:44.599 "traddr": "10.0.0.2", 00:16:44.599 "trsvcid": "4420" 00:16:44.599 }, 00:16:44.599 "peer_address": { 00:16:44.599 "trtype": "TCP", 00:16:44.599 "adrfam": "IPv4", 00:16:44.599 "traddr": "10.0.0.1", 00:16:44.599 "trsvcid": "51498" 00:16:44.599 }, 00:16:44.599 "auth": { 00:16:44.599 "state": "completed", 00:16:44.599 "digest": "sha256", 00:16:44.599 "dhgroup": "null" 00:16:44.599 } 00:16:44.599 } 00:16:44.599 ]' 00:16:44.599 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.599 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.599 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.599 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:44.599 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.599 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.599 14:18:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.599 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.857 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:16:44.857 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:16:45.424 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.424 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:45.424 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.424 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.424 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.424 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.425 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:45.425 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:45.683 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:45.683 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.683 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.683 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:45.683 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:45.683 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.683 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:45.683 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.683 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:45.683 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.683 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:45.683 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.683 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.942 00:16:45.942 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.942 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.942 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.201 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.201 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.201 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.201 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.201 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.201 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.201 { 00:16:46.201 "cntlid": 7, 00:16:46.201 "qid": 0, 00:16:46.201 "state": "enabled", 00:16:46.201 "thread": "nvmf_tgt_poll_group_000", 00:16:46.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:46.201 "listen_address": { 00:16:46.201 "trtype": "TCP", 00:16:46.201 "adrfam": "IPv4", 00:16:46.201 "traddr": "10.0.0.2", 00:16:46.201 "trsvcid": "4420" 00:16:46.201 }, 00:16:46.201 "peer_address": { 00:16:46.201 "trtype": "TCP", 00:16:46.201 "adrfam": "IPv4", 00:16:46.201 "traddr": "10.0.0.1", 00:16:46.201 "trsvcid": "51518" 00:16:46.201 }, 00:16:46.201 "auth": { 00:16:46.201 "state": "completed", 00:16:46.201 "digest": "sha256", 00:16:46.201 "dhgroup": "null" 00:16:46.201 } 00:16:46.201 } 00:16:46.201 ]' 00:16:46.201 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.201 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.201 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.201 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:46.201 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.201 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.201 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.201 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.460 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:16:46.460 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:16:47.028 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.028 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:47.028 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.028 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.028 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.028 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.028 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.028 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:47.028 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:47.286 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:47.286 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.286 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:47.286 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:47.286 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:47.286 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.286 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.286 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.286 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.286 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.286 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.286 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.286 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.545 00:16:47.545 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.545 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.545 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.803 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.803 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.803 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.803 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.804 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.804 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.804 { 00:16:47.804 "cntlid": 9, 00:16:47.804 "qid": 0, 00:16:47.804 "state": "enabled", 00:16:47.804 "thread": "nvmf_tgt_poll_group_000", 00:16:47.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:47.804 "listen_address": { 00:16:47.804 "trtype": "TCP", 00:16:47.804 "adrfam": "IPv4", 00:16:47.804 "traddr": "10.0.0.2", 00:16:47.804 "trsvcid": "4420" 00:16:47.804 }, 00:16:47.804 "peer_address": { 00:16:47.804 "trtype": "TCP", 00:16:47.804 "adrfam": "IPv4", 00:16:47.804 "traddr": "10.0.0.1", 00:16:47.804 "trsvcid": "51548" 00:16:47.804 }, 00:16:47.804 "auth": { 00:16:47.804 "state": "completed", 00:16:47.804 "digest": "sha256", 00:16:47.804 "dhgroup": "ffdhe2048" 00:16:47.804 } 00:16:47.804 } 00:16:47.804 ]' 00:16:47.804 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.804 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.804 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.804 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:47.804 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.804 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.804 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.804 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.062 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:16:48.062 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:16:48.629 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.629 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:48.629 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.629 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.629 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.629 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.629 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:48.629 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:48.888 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:48.888 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.888 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.888 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:48.888 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:48.888 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.888 14:18:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.888 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.888 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.888 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.888 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.888 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.888 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.147 00:16:49.147 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.147 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.147 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.405 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.405 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.405 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.405 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.405 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.405 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.405 { 00:16:49.405 "cntlid": 11, 00:16:49.405 "qid": 0, 00:16:49.405 "state": "enabled", 00:16:49.405 "thread": "nvmf_tgt_poll_group_000", 00:16:49.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:49.405 "listen_address": { 00:16:49.405 "trtype": "TCP", 00:16:49.405 "adrfam": "IPv4", 00:16:49.405 "traddr": "10.0.0.2", 00:16:49.405 "trsvcid": "4420" 00:16:49.405 }, 00:16:49.405 "peer_address": { 00:16:49.405 "trtype": "TCP", 00:16:49.405 "adrfam": "IPv4", 00:16:49.405 "traddr": "10.0.0.1", 00:16:49.405 "trsvcid": "51588" 00:16:49.405 }, 00:16:49.405 "auth": { 00:16:49.405 "state": "completed", 00:16:49.405 "digest": "sha256", 00:16:49.405 "dhgroup": "ffdhe2048" 00:16:49.405 } 00:16:49.405 } 00:16:49.405 ]' 00:16:49.405 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.405 14:18:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.405 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.405 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:49.405 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.405 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.405 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.405 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.664 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:16:49.664 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:16:50.232 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.232 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:50.232 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.232 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.232 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.232 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.232 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:50.232 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:50.491 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:50.491 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.491 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.491 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:50.491 14:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:50.491 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.491 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.491 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.491 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.491 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.491 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.491 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.491 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.750 00:16:50.750 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.750 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.750 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.750 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.750 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.750 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.750 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.750 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.750 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.750 { 00:16:50.750 "cntlid": 13, 00:16:50.750 "qid": 0, 00:16:50.750 "state": "enabled", 00:16:50.750 "thread": "nvmf_tgt_poll_group_000", 00:16:50.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:50.750 "listen_address": { 00:16:50.750 "trtype": "TCP", 00:16:50.750 "adrfam": "IPv4", 00:16:50.750 "traddr": "10.0.0.2", 00:16:50.750 "trsvcid": "4420" 00:16:50.750 }, 00:16:50.750 "peer_address": { 00:16:50.750 "trtype": "TCP", 00:16:50.750 "adrfam": "IPv4", 00:16:50.750 "traddr": "10.0.0.1", 00:16:50.750 "trsvcid": "51620" 00:16:50.750 }, 00:16:50.750 "auth": { 00:16:50.750 "state": "completed", 00:16:50.750 "digest": 
"sha256", 00:16:50.750 "dhgroup": "ffdhe2048" 00:16:50.750 } 00:16:50.750 } 00:16:50.750 ]' 00:16:50.750 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.009 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.009 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.009 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:51.009 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.009 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.009 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.009 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.268 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:16:51.268 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:16:51.835 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.835 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:51.835 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.835 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.835 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.835 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.835 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:51.835 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:51.835 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:51.835 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.835 14:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:51.835 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:51.835 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:51.835 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.835 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:51.835 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.835 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.095 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.095 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:52.095 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.095 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.095 00:16:52.354 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.354 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.354 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.354 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.354 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.354 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.354 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.354 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.354 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.354 { 00:16:52.354 "cntlid": 15, 00:16:52.354 "qid": 0, 00:16:52.354 "state": "enabled", 00:16:52.354 "thread": "nvmf_tgt_poll_group_000", 00:16:52.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:52.354 "listen_address": { 00:16:52.354 "trtype": "TCP", 00:16:52.354 "adrfam": "IPv4", 00:16:52.354 "traddr": "10.0.0.2", 00:16:52.354 "trsvcid": "4420" 00:16:52.354 }, 00:16:52.354 "peer_address": { 00:16:52.354 "trtype": "TCP", 00:16:52.354 "adrfam": "IPv4", 00:16:52.354 "traddr": "10.0.0.1", 00:16:52.354 
"trsvcid": "51644" 00:16:52.354 }, 00:16:52.354 "auth": { 00:16:52.354 "state": "completed", 00:16:52.354 "digest": "sha256", 00:16:52.354 "dhgroup": "ffdhe2048" 00:16:52.354 } 00:16:52.354 } 00:16:52.354 ]' 00:16:52.354 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.612 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.612 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.612 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:52.612 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.612 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.612 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.612 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.871 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:16:52.871 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:16:53.439 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.439 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:53.439 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.439 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.439 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.439 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.439 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.439 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:53.439 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:53.439 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:53.439 14:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.439 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:53.439 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:53.439 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:53.439 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.439 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.439 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.439 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.439 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.698 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.698 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.698 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.698 00:16:53.957 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.957 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.957 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.957 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.957 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.957 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.957 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.957 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.957 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.957 { 00:16:53.957 "cntlid": 17, 00:16:53.957 "qid": 0, 00:16:53.957 "state": "enabled", 00:16:53.957 "thread": "nvmf_tgt_poll_group_000", 00:16:53.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:53.957 "listen_address": { 00:16:53.957 "trtype": "TCP", 00:16:53.957 "adrfam": "IPv4", 
00:16:53.957 "traddr": "10.0.0.2", 00:16:53.957 "trsvcid": "4420" 00:16:53.957 }, 00:16:53.957 "peer_address": { 00:16:53.957 "trtype": "TCP", 00:16:53.957 "adrfam": "IPv4", 00:16:53.957 "traddr": "10.0.0.1", 00:16:53.957 "trsvcid": "38946" 00:16:53.957 }, 00:16:53.957 "auth": { 00:16:53.957 "state": "completed", 00:16:53.957 "digest": "sha256", 00:16:53.957 "dhgroup": "ffdhe3072" 00:16:53.957 } 00:16:53.957 } 00:16:53.957 ]' 00:16:53.957 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.957 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.957 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.215 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:54.215 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.215 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.216 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.216 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.478 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:16:54.478 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:16:55.046 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.046 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:55.046 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.046 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.046 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.046 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.046 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:55.046 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:55.046 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:55.046 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.046 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:55.046 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:55.046 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:55.046 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.046 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.046 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.046 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.046 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.046 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.046 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.046 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.305 00:16:55.305 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.305 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.305 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.565 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.565 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.565 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.565 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.565 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.565 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.565 { 
00:16:55.565 "cntlid": 19, 00:16:55.565 "qid": 0, 00:16:55.565 "state": "enabled", 00:16:55.565 "thread": "nvmf_tgt_poll_group_000", 00:16:55.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:55.565 "listen_address": { 00:16:55.565 "trtype": "TCP", 00:16:55.565 "adrfam": "IPv4", 00:16:55.565 "traddr": "10.0.0.2", 00:16:55.565 "trsvcid": "4420" 00:16:55.565 }, 00:16:55.565 "peer_address": { 00:16:55.565 "trtype": "TCP", 00:16:55.565 "adrfam": "IPv4", 00:16:55.565 "traddr": "10.0.0.1", 00:16:55.565 "trsvcid": "38962" 00:16:55.565 }, 00:16:55.565 "auth": { 00:16:55.565 "state": "completed", 00:16:55.565 "digest": "sha256", 00:16:55.565 "dhgroup": "ffdhe3072" 00:16:55.565 } 00:16:55.565 } 00:16:55.565 ]' 00:16:55.565 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.565 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.565 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.565 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:55.565 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.824 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.824 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.824 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.824 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:16:55.824 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:16:56.392 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.392 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:56.392 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.392 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.392 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.392 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.392 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:56.392 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:56.651 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:56.651 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.651 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.651 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:56.651 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:56.651 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.651 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.651 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.651 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.651 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.651 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.651 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.651 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.910 00:16:56.910 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.910 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.910 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.169 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.169 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.169 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.169 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.169 14:18:57 
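The trace above is one pass of the script's per-key cycle, and it repeats for every digest/DH-group/key combination: constrain the host-side initiator to a single digest and DH group, register the host NQN on the target with the key under test, attach a controller (which is where DH-HMAC-CHAP actually runs), verify the resulting qpair, then tear everything down. A condensed sketch of the host-facing half, using only the RPCs visible in this log; key2/ckey2 are key names registered earlier in the script (not shown in this excerpt), and rpc_cmd in the script is the same rpc.py pointed at the target's default socket:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562

    # limit the SPDK host (bdev_nvme) to one digest and one DH group per iteration
    $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    # allow the host NQN on the target, naming key and controller key for bidirectional auth
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # attaching the controller performs the actual DH-HMAC-CHAP exchange
    $rpc -s $hostsock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2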
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.169 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.169 { 00:16:57.169 "cntlid": 21, 00:16:57.169 "qid": 0, 00:16:57.169 "state": "enabled", 00:16:57.169 "thread": "nvmf_tgt_poll_group_000", 00:16:57.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:57.169 "listen_address": { 00:16:57.169 "trtype": "TCP", 00:16:57.169 "adrfam": "IPv4", 00:16:57.169 "traddr": "10.0.0.2", 00:16:57.169 "trsvcid": "4420" 00:16:57.169 }, 00:16:57.169 "peer_address": { 00:16:57.169 "trtype": "TCP", 00:16:57.169 "adrfam": "IPv4", 00:16:57.169 "traddr": "10.0.0.1", 00:16:57.169 "trsvcid": "38994" 00:16:57.169 }, 00:16:57.169 "auth": { 00:16:57.169 "state": "completed", 00:16:57.169 "digest": "sha256", 00:16:57.169 "dhgroup": "ffdhe3072" 00:16:57.169 } 00:16:57.169 } 00:16:57.169 ]' 00:16:57.169 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.169 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.169 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.169 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:57.169 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.169 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.169 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.169 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.428 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:16:57.428 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:16:57.995 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.995 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:57.995 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.995 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.995 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:57.995 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.995 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:57.995 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:58.254 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:58.254 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.254 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:58.254 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:58.254 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:58.254 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.254 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:58.254 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.254 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.254 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.254 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:58.254 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.254 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.513 00:16:58.513 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.513 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.513 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.772 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.772 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.772 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.772 14:18:59 
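One detail worth calling out in the key3 pass above: connect_authenticate assembles the controller-key arguments as ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}). Bash's ${var:+word} form expands to word only when var is set and non-empty, so for key3, which has no controller key configured, the array stays empty and both nvmf_subsystem_add_host and the attach run with --dhchap-key key3 alone, i.e. unidirectional authentication (the target never has to prove itself). A standalone illustration of that expansion:

    #!/usr/bin/env bash
    ckeys=([0]=ck0 [1]=ck1 [2]=ck2 [3]=)      # key3 deliberately has no controller key
    for i in 0 3; do
        ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
        echo "key$i -> ${ckey[*]:-(no ctrlr key: unidirectional)}"
    done
    # key0 -> --dhchap-ctrlr-key ckey0
    # key3 -> (no ctrlr key: unidirectional)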
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.772 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.772 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.772 { 00:16:58.772 "cntlid": 23, 00:16:58.772 "qid": 0, 00:16:58.772 "state": "enabled", 00:16:58.772 "thread": "nvmf_tgt_poll_group_000", 00:16:58.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:58.772 "listen_address": { 00:16:58.772 "trtype": "TCP", 00:16:58.772 "adrfam": "IPv4", 00:16:58.772 "traddr": "10.0.0.2", 00:16:58.772 "trsvcid": "4420" 00:16:58.772 }, 00:16:58.772 "peer_address": { 00:16:58.772 "trtype": "TCP", 00:16:58.772 "adrfam": "IPv4", 00:16:58.772 "traddr": "10.0.0.1", 00:16:58.772 "trsvcid": "39022" 00:16:58.772 }, 00:16:58.772 "auth": { 00:16:58.772 "state": "completed", 00:16:58.772 "digest": "sha256", 00:16:58.772 "dhgroup": "ffdhe3072" 00:16:58.772 } 00:16:58.772 } 00:16:58.772 ]' 00:16:58.772 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.772 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.772 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.772 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:58.772 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.772 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.772 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.772 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.031 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:16:59.031 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:16:59.599 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.599 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:59.599 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.599 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.599 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:59.599 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.599 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.599 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:59.599 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:59.858 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:59.858 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.858 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:59.858 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:59.858 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:59.858 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.858 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.858 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.858 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.858 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.858 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.858 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.858 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.116 00:17:00.116 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.116 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.116 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.374 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.374 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.374 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.374 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.374 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.375 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.375 { 00:17:00.375 "cntlid": 25, 00:17:00.375 "qid": 0, 00:17:00.375 "state": "enabled", 00:17:00.375 "thread": "nvmf_tgt_poll_group_000", 00:17:00.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:00.375 "listen_address": { 00:17:00.375 "trtype": "TCP", 00:17:00.375 "adrfam": "IPv4", 00:17:00.375 "traddr": "10.0.0.2", 00:17:00.375 "trsvcid": "4420" 00:17:00.375 }, 00:17:00.375 "peer_address": { 00:17:00.375 "trtype": "TCP", 00:17:00.375 "adrfam": "IPv4", 00:17:00.375 "traddr": "10.0.0.1", 00:17:00.375 "trsvcid": "39054" 00:17:00.375 }, 00:17:00.375 "auth": { 00:17:00.375 "state": "completed", 00:17:00.375 "digest": "sha256", 00:17:00.375 "dhgroup": "ffdhe4096" 00:17:00.375 } 00:17:00.375 } 00:17:00.375 ]' 00:17:00.375 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.375 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.375 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.375 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:00.375 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.633 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.633 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.633 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.633 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:17:00.634 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:17:01.201 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.201 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:01.201 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.201 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.201 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.201 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.201 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:01.201 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:01.460 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:01.460 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.460 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:01.460 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:01.460 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:01.460 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.460 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.460 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.460 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.460 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.460 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.460 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.460 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.718 00:17:01.718 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.718 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.718 14:19:02 
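After every attach the script double-checks the result rather than trusting the RPC's exit status alone: bdev_nvme_get_controllers piped through jq -r '.[].name' confirms the controller exists, and nvmf_subsystem_get_qpairs on the target returns the qpair's auth object, whose state, digest, and dhgroup fields are matched against what was requested. A sketch of that verification, assuming the same JSON shape printed in this log and the $rpc/$subnqn variables from the earlier sketch:

    qpairs=$($rpc nvmf_subsystem_get_qpairs $subnqn)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

An auth.state of "completed" only appears once both sides have finished the challenge/response, so a downgraded or skipped negotiation fails these checks even when the connect itself succeeded.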
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.977 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.977 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.977 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.977 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.977 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.977 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.977 { 00:17:01.977 "cntlid": 27, 00:17:01.977 "qid": 0, 00:17:01.977 "state": "enabled", 00:17:01.977 "thread": "nvmf_tgt_poll_group_000", 00:17:01.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:01.977 "listen_address": { 00:17:01.977 "trtype": "TCP", 00:17:01.977 "adrfam": "IPv4", 00:17:01.977 "traddr": "10.0.0.2", 00:17:01.977 "trsvcid": "4420" 00:17:01.977 }, 00:17:01.977 "peer_address": { 00:17:01.977 "trtype": "TCP", 00:17:01.977 "adrfam": "IPv4", 00:17:01.977 "traddr": "10.0.0.1", 00:17:01.977 "trsvcid": "39082" 00:17:01.977 }, 00:17:01.977 "auth": { 00:17:01.977 "state": "completed", 00:17:01.977 "digest": "sha256", 00:17:01.977 "dhgroup": "ffdhe4096" 00:17:01.977 } 00:17:01.977 } 00:17:01.977 ]' 00:17:01.977 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.977 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.977 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.977 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:01.977 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.235 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.235 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.235 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.235 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:17:02.235 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:17:02.803 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.803 
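Alongside the SPDK initiator, each pass also drives the kernel host through nvme-cli with the same credentials in their interchange form; the disconnect that follows is what produces the "disconnected 1 controller(s)" line below. Roughly, with the secrets abbreviated and $hostnqn as before (-i 1 caps the connection at one I/O queue and -l 0 sets the controller-loss timeout, both as in this run; the --hostid expansion just reuses the UUID suffix of the host NQN, as the log does):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "${hostnqn#*uuid:}" -l 0 \
        --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0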
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.803 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:02.803 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.803 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.803 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.803 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.803 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:02.803 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:03.062 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:03.062 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.062 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:03.062 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:03.062 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:03.062 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.062 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.062 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.062 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.062 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.062 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.062 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.062 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.320 00:17:03.321 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.321 14:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.321 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.579 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.579 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.579 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.579 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.579 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.579 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.579 { 00:17:03.579 "cntlid": 29, 00:17:03.579 "qid": 0, 00:17:03.579 "state": "enabled", 00:17:03.579 "thread": "nvmf_tgt_poll_group_000", 00:17:03.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:03.579 "listen_address": { 00:17:03.579 "trtype": "TCP", 00:17:03.579 "adrfam": "IPv4", 00:17:03.579 "traddr": "10.0.0.2", 00:17:03.579 "trsvcid": "4420" 00:17:03.579 }, 00:17:03.579 "peer_address": { 00:17:03.579 "trtype": "TCP", 00:17:03.579 "adrfam": "IPv4", 00:17:03.579 "traddr": "10.0.0.1", 00:17:03.579 "trsvcid": "39122" 00:17:03.579 }, 00:17:03.579 "auth": { 00:17:03.579 "state": "completed", 00:17:03.579 "digest": "sha256", 00:17:03.579 "dhgroup": "ffdhe4096" 00:17:03.579 } 00:17:03.579 } 00:17:03.579 ]' 00:17:03.579 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.579 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.579 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.579 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:03.579 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.579 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.579 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.579 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.838 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:17:03.838 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret 
DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:17:04.405 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.405 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:04.405 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.405 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.405 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.405 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.405 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:04.405 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:04.662 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:04.662 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.662 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:04.662 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:04.662 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:04.662 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.662 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:04.662 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.662 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.662 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.662 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:04.662 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.662 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.920 00:17:04.920 14:19:05 
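The secret strings themselves follow the DH-HMAC-CHAP interchange format from NVMe TP 8006: DHHC-1:<t>:<base64 of the key plus a CRC-32>:, where the second field records how the secret was derived, 00 for a plain secret and 01/02/03 for one transformed with SHA-256/384/512. The four keys in this run deliberately cover all four variants, which is why the key-3 passes above carry DHHC-1:03: secrets. Recent nvme-cli can mint such secrets; a hedged example (the gen-dhchap-key subcommand and its short flags are taken from nvme-cli's documentation, so verify against nvme gen-dhchap-key --help on your build):

    # random 48-byte secret, SHA-384 transform (-m 2), bound to the host NQN
    nvme gen-dhchap-key -m 2 -l 48 -n "$hostnqn"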
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.920 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.920 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.178 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.178 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.178 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.178 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.178 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.178 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.178 { 00:17:05.178 "cntlid": 31, 00:17:05.178 "qid": 0, 00:17:05.178 "state": "enabled", 00:17:05.178 "thread": "nvmf_tgt_poll_group_000", 00:17:05.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:05.178 "listen_address": { 00:17:05.178 "trtype": "TCP", 00:17:05.178 "adrfam": "IPv4", 00:17:05.178 "traddr": "10.0.0.2", 00:17:05.178 "trsvcid": "4420" 00:17:05.178 }, 00:17:05.178 "peer_address": { 00:17:05.178 "trtype": "TCP", 00:17:05.178 "adrfam": "IPv4", 00:17:05.178 "traddr": "10.0.0.1", 00:17:05.178 "trsvcid": "32784" 00:17:05.178 }, 00:17:05.178 "auth": { 00:17:05.178 "state": "completed", 00:17:05.178 "digest": "sha256", 00:17:05.178 "dhgroup": "ffdhe4096" 00:17:05.178 } 00:17:05.178 } 00:17:05.178 ]' 00:17:05.178 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.178 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:05.178 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.178 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:05.178 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.436 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.436 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.436 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.436 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:17:05.436 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:17:06.004 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.004 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:06.004 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.004 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.004 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.004 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.004 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.004 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:06.004 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:06.262 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:06.262 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.262 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:06.262 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:06.262 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:06.262 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.262 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.262 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.262 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.262 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.262 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.262 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.262 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.829 00:17:06.829 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.829 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.829 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.829 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.829 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.829 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.829 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.829 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.829 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.829 { 00:17:06.829 "cntlid": 33, 00:17:06.829 "qid": 0, 00:17:06.829 "state": "enabled", 00:17:06.829 "thread": "nvmf_tgt_poll_group_000", 00:17:06.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:06.829 "listen_address": { 00:17:06.829 "trtype": "TCP", 00:17:06.829 "adrfam": "IPv4", 00:17:06.829 "traddr": "10.0.0.2", 00:17:06.829 "trsvcid": "4420" 00:17:06.829 }, 00:17:06.829 "peer_address": { 00:17:06.829 "trtype": "TCP", 00:17:06.829 "adrfam": "IPv4", 00:17:06.829 "traddr": "10.0.0.1", 00:17:06.829 "trsvcid": "32810" 00:17:06.829 }, 00:17:06.829 "auth": { 00:17:06.829 "state": "completed", 00:17:06.829 "digest": "sha256", 00:17:06.829 "dhgroup": "ffdhe6144" 00:17:06.829 } 00:17:06.829 } 00:17:06.829 ]' 00:17:06.829 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.829 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.829 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.088 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:07.088 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.088 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.088 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.088 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.088 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret 
DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:17:07.088 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:17:07.654 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.654 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:07.654 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.654 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.913 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.913 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.913 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:07.913 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:07.913 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:07.913 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.913 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:07.913 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:07.913 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:07.913 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.913 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.913 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.913 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.913 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.913 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.913 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.913 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.479 00:17:08.479 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.479 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.479 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.479 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.479 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.479 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.479 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.479 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.479 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.479 { 00:17:08.479 "cntlid": 35, 00:17:08.479 "qid": 0, 00:17:08.479 "state": "enabled", 00:17:08.479 "thread": "nvmf_tgt_poll_group_000", 00:17:08.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:08.479 "listen_address": { 00:17:08.479 "trtype": "TCP", 00:17:08.479 "adrfam": "IPv4", 00:17:08.479 "traddr": "10.0.0.2", 00:17:08.479 "trsvcid": "4420" 00:17:08.479 }, 00:17:08.479 "peer_address": { 00:17:08.479 "trtype": "TCP", 00:17:08.479 "adrfam": "IPv4", 00:17:08.479 "traddr": "10.0.0.1", 00:17:08.479 "trsvcid": "32842" 00:17:08.479 }, 00:17:08.479 "auth": { 00:17:08.479 "state": "completed", 00:17:08.479 "digest": "sha256", 00:17:08.479 "dhgroup": "ffdhe6144" 00:17:08.479 } 00:17:08.479 } 00:17:08.479 ]' 00:17:08.479 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.479 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.479 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.738 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:08.738 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.738 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.738 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.738 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.738 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:17:08.738 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:17:09.304 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.627 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:09.627 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.627 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.627 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.627 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.627 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:09.627 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:09.627 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:09.627 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.627 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:09.627 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:09.627 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:09.627 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.627 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.627 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.627 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.627 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.627 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.627 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.627 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.938 00:17:09.938 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.938 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.938 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.196 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.196 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.196 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.196 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.196 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.196 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.196 { 00:17:10.196 "cntlid": 37, 00:17:10.196 "qid": 0, 00:17:10.196 "state": "enabled", 00:17:10.196 "thread": "nvmf_tgt_poll_group_000", 00:17:10.196 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:10.196 "listen_address": { 00:17:10.196 "trtype": "TCP", 00:17:10.196 "adrfam": "IPv4", 00:17:10.196 "traddr": "10.0.0.2", 00:17:10.196 "trsvcid": "4420" 00:17:10.196 }, 00:17:10.196 "peer_address": { 00:17:10.196 "trtype": "TCP", 00:17:10.196 "adrfam": "IPv4", 00:17:10.196 "traddr": "10.0.0.1", 00:17:10.196 "trsvcid": "32872" 00:17:10.196 }, 00:17:10.196 "auth": { 00:17:10.196 "state": "completed", 00:17:10.196 "digest": "sha256", 00:17:10.196 "dhgroup": "ffdhe6144" 00:17:10.196 } 00:17:10.196 } 00:17:10.196 ]' 00:17:10.196 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.196 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:10.196 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.196 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:10.196 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.455 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.455 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:10.455 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.455 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:17:10.455 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:17:11.022 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.022 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:11.022 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.022 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.280 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.280 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.280 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:11.280 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:11.280 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:11.280 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.280 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:11.280 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:11.280 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:11.280 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.280 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:11.280 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.280 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.280 14:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.280 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:11.280 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.280 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.849 00:17:11.849 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.849 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.849 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.849 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.849 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.849 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.849 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.849 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.849 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.849 { 00:17:11.849 "cntlid": 39, 00:17:11.849 "qid": 0, 00:17:11.849 "state": "enabled", 00:17:11.849 "thread": "nvmf_tgt_poll_group_000", 00:17:11.849 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:11.849 "listen_address": { 00:17:11.849 "trtype": "TCP", 00:17:11.849 "adrfam": "IPv4", 00:17:11.849 "traddr": "10.0.0.2", 00:17:11.849 "trsvcid": "4420" 00:17:11.849 }, 00:17:11.849 "peer_address": { 00:17:11.849 "trtype": "TCP", 00:17:11.849 "adrfam": "IPv4", 00:17:11.849 "traddr": "10.0.0.1", 00:17:11.849 "trsvcid": "32904" 00:17:11.849 }, 00:17:11.849 "auth": { 00:17:11.849 "state": "completed", 00:17:11.849 "digest": "sha256", 00:17:11.849 "dhgroup": "ffdhe6144" 00:17:11.849 } 00:17:11.849 } 00:17:11.849 ]' 00:17:11.849 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.107 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.107 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.107 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:12.107 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.107 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:12.107 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.107 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.366 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:17:12.366 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:17:12.933 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.933 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:12.933 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.933 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.933 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.933 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.933 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.933 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:12.933 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:13.191 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:13.191 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.191 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:13.191 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:13.191 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:13.191 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.191 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.191 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
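Each connect_authenticate pass in the trace above repeats the same host/target sequence. Below is a minimal sketch of one pass in shell, distilled from the commands visible in the log, assuming an SPDK target already listening on 10.0.0.2:4420 and a host RPC socket at /var/tmp/host.sock; the full /var/jenkins/.../spdk/scripts/rpc.py path is shortened to scripts/rpc.py, HOSTNQN is a placeholder for the nqn.2014-08.org.nvmexpress:uuid:... host NQN, and key0/ckey0 name the key pair for index 0 as in the surrounding sha256/ffdhe8192 iteration:

  # Pin the host to one digest/dhgroup combination for this attempt.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

  # Target side: allow the host NQN with its DH-HMAC-CHAP key pair.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: attach a controller, authenticating with the same keys.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Verify the controller came up and the qpair finished authentication.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.state'    # the suite expects "completed"

  # Tear down before the next digest/dhgroup/key combination.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0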
00:17:13.191 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.191 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.191 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.191 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.191 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.450 00:17:13.709 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.709 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.709 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.709 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.709 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.709 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.709 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.709 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.709 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.709 { 00:17:13.709 "cntlid": 41, 00:17:13.709 "qid": 0, 00:17:13.709 "state": "enabled", 00:17:13.709 "thread": "nvmf_tgt_poll_group_000", 00:17:13.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:13.709 "listen_address": { 00:17:13.709 "trtype": "TCP", 00:17:13.709 "adrfam": "IPv4", 00:17:13.709 "traddr": "10.0.0.2", 00:17:13.709 "trsvcid": "4420" 00:17:13.709 }, 00:17:13.709 "peer_address": { 00:17:13.709 "trtype": "TCP", 00:17:13.709 "adrfam": "IPv4", 00:17:13.709 "traddr": "10.0.0.1", 00:17:13.709 "trsvcid": "32934" 00:17:13.709 }, 00:17:13.709 "auth": { 00:17:13.709 "state": "completed", 00:17:13.709 "digest": "sha256", 00:17:13.709 "dhgroup": "ffdhe8192" 00:17:13.709 } 00:17:13.709 } 00:17:13.709 ]' 00:17:13.709 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.968 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.968 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.968 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:13.968 14:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.968 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.968 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.968 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.226 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:17:14.226 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:17:14.798 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.798 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:14.798 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.798 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.798 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.798 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.798 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:14.798 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:14.798 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:14.798 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.798 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:14.798 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:14.798 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:14.798 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.798 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.798 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.798 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.798 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.798 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.798 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.798 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.365 00:17:15.365 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.365 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.365 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.623 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.624 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.624 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.624 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.624 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.624 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.624 { 00:17:15.624 "cntlid": 43, 00:17:15.624 "qid": 0, 00:17:15.624 "state": "enabled", 00:17:15.624 "thread": "nvmf_tgt_poll_group_000", 00:17:15.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:15.624 "listen_address": { 00:17:15.624 "trtype": "TCP", 00:17:15.624 "adrfam": "IPv4", 00:17:15.624 "traddr": "10.0.0.2", 00:17:15.624 "trsvcid": "4420" 00:17:15.624 }, 00:17:15.624 "peer_address": { 00:17:15.624 "trtype": "TCP", 00:17:15.624 "adrfam": "IPv4", 00:17:15.624 "traddr": "10.0.0.1", 00:17:15.624 "trsvcid": "58890" 00:17:15.624 }, 00:17:15.624 "auth": { 00:17:15.624 "state": "completed", 00:17:15.624 "digest": "sha256", 00:17:15.624 "dhgroup": "ffdhe8192" 00:17:15.624 } 00:17:15.624 } 00:17:15.624 ]' 00:17:15.624 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.624 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:15.624 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.624 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:15.624 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.624 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.624 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.624 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.883 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:17:15.883 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:17:16.450 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.450 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:16.450 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.450 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.450 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.450 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.450 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:16.451 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:16.709 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:16.709 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.709 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:16.709 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:16.710 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:16.710 14:19:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.710 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.710 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.710 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.710 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.710 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.710 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.710 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.277 00:17:17.277 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.277 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.277 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.277 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.277 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.277 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.277 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.277 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.535 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.535 { 00:17:17.535 "cntlid": 45, 00:17:17.535 "qid": 0, 00:17:17.535 "state": "enabled", 00:17:17.535 "thread": "nvmf_tgt_poll_group_000", 00:17:17.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:17.535 "listen_address": { 00:17:17.535 "trtype": "TCP", 00:17:17.535 "adrfam": "IPv4", 00:17:17.535 "traddr": "10.0.0.2", 00:17:17.535 "trsvcid": "4420" 00:17:17.535 }, 00:17:17.535 "peer_address": { 00:17:17.535 "trtype": "TCP", 00:17:17.535 "adrfam": "IPv4", 00:17:17.535 "traddr": "10.0.0.1", 00:17:17.535 "trsvcid": "58924" 00:17:17.535 }, 00:17:17.535 "auth": { 00:17:17.535 "state": "completed", 00:17:17.535 "digest": "sha256", 00:17:17.535 "dhgroup": "ffdhe8192" 00:17:17.535 } 00:17:17.535 } 00:17:17.535 ]' 00:17:17.535 
14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.535 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:17.535 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.535 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:17.535 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.535 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.535 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.536 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.794 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:17:17.794 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:17:18.362 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.362 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:18.362 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.362 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.362 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.362 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.362 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:18.362 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:18.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:18.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:18.621 14:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:18.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:18.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:18.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:18.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:18.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:18.879 00:17:18.879 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.879 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.879 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.138 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.138 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.138 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.138 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.138 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.138 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.138 { 00:17:19.138 "cntlid": 47, 00:17:19.138 "qid": 0, 00:17:19.138 "state": "enabled", 00:17:19.138 "thread": "nvmf_tgt_poll_group_000", 00:17:19.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:19.138 "listen_address": { 00:17:19.138 "trtype": "TCP", 00:17:19.138 "adrfam": "IPv4", 00:17:19.138 "traddr": "10.0.0.2", 00:17:19.138 "trsvcid": "4420" 00:17:19.138 }, 00:17:19.138 "peer_address": { 00:17:19.138 "trtype": "TCP", 00:17:19.138 "adrfam": "IPv4", 00:17:19.138 "traddr": "10.0.0.1", 00:17:19.138 "trsvcid": "58948" 00:17:19.138 }, 00:17:19.138 "auth": { 00:17:19.138 "state": "completed", 00:17:19.138 
"digest": "sha256", 00:17:19.138 "dhgroup": "ffdhe8192" 00:17:19.138 } 00:17:19.138 } 00:17:19.138 ]' 00:17:19.138 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.138 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.138 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.138 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:19.138 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.397 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.397 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.397 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.397 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:17:19.397 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:17:19.964 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.223 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:20.223 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.223 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.223 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.223 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:20.223 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.223 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.223 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:20.223 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:20.223 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:20.223 14:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.223 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.223 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:20.223 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:20.223 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.223 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.223 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.223 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.223 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.223 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.223 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.223 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.482 00:17:20.482 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.482 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.482 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.741 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.741 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.741 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.741 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.741 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.741 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.741 { 00:17:20.741 "cntlid": 49, 00:17:20.741 "qid": 0, 00:17:20.741 "state": "enabled", 00:17:20.741 "thread": "nvmf_tgt_poll_group_000", 00:17:20.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:20.741 "listen_address": { 00:17:20.741 "trtype": "TCP", 00:17:20.741 "adrfam": "IPv4", 
00:17:20.741 "traddr": "10.0.0.2", 00:17:20.741 "trsvcid": "4420" 00:17:20.741 }, 00:17:20.741 "peer_address": { 00:17:20.741 "trtype": "TCP", 00:17:20.741 "adrfam": "IPv4", 00:17:20.741 "traddr": "10.0.0.1", 00:17:20.741 "trsvcid": "58972" 00:17:20.741 }, 00:17:20.741 "auth": { 00:17:20.741 "state": "completed", 00:17:20.741 "digest": "sha384", 00:17:20.741 "dhgroup": "null" 00:17:20.741 } 00:17:20.741 } 00:17:20.741 ]' 00:17:20.741 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.741 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.741 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.741 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:20.741 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.000 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.000 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.000 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.000 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:17:21.000 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:17:21.567 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.567 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:21.567 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.567 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.567 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.567 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.567 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:21.567 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:21.826 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:21.826 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.826 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.826 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:21.826 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:21.826 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.826 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.826 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.826 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.826 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.826 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.826 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.826 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.085 00:17:22.085 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.085 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.085 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.343 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.343 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.343 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.343 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.343 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.343 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.343 { 00:17:22.343 "cntlid": 51, 00:17:22.343 "qid": 0, 00:17:22.343 "state": "enabled", 
00:17:22.343 "thread": "nvmf_tgt_poll_group_000", 00:17:22.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:22.343 "listen_address": { 00:17:22.343 "trtype": "TCP", 00:17:22.343 "adrfam": "IPv4", 00:17:22.343 "traddr": "10.0.0.2", 00:17:22.343 "trsvcid": "4420" 00:17:22.343 }, 00:17:22.343 "peer_address": { 00:17:22.343 "trtype": "TCP", 00:17:22.343 "adrfam": "IPv4", 00:17:22.343 "traddr": "10.0.0.1", 00:17:22.343 "trsvcid": "59008" 00:17:22.343 }, 00:17:22.343 "auth": { 00:17:22.343 "state": "completed", 00:17:22.343 "digest": "sha384", 00:17:22.343 "dhgroup": "null" 00:17:22.343 } 00:17:22.343 } 00:17:22.343 ]' 00:17:22.343 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.343 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.343 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.343 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:22.343 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.343 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.343 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.343 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.602 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:17:22.602 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:17:23.170 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.170 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:23.170 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.170 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.170 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.170 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.170 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:23.170 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:23.428 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:23.428 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.428 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.428 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:23.428 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:23.429 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.429 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.429 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.429 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.429 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.429 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.429 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.429 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.687 00:17:23.687 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.687 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.687 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.946 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.946 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.946 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.946 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.946 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.946 14:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.946 { 00:17:23.946 "cntlid": 53, 00:17:23.946 "qid": 0, 00:17:23.946 "state": "enabled", 00:17:23.946 "thread": "nvmf_tgt_poll_group_000", 00:17:23.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:23.946 "listen_address": { 00:17:23.946 "trtype": "TCP", 00:17:23.946 "adrfam": "IPv4", 00:17:23.946 "traddr": "10.0.0.2", 00:17:23.946 "trsvcid": "4420" 00:17:23.946 }, 00:17:23.946 "peer_address": { 00:17:23.946 "trtype": "TCP", 00:17:23.946 "adrfam": "IPv4", 00:17:23.946 "traddr": "10.0.0.1", 00:17:23.946 "trsvcid": "50402" 00:17:23.946 }, 00:17:23.946 "auth": { 00:17:23.946 "state": "completed", 00:17:23.946 "digest": "sha384", 00:17:23.946 "dhgroup": "null" 00:17:23.946 } 00:17:23.946 } 00:17:23.946 ]' 00:17:23.946 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.946 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.946 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.946 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:23.946 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.946 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.946 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.946 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.205 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:17:24.205 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:17:24.772 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.772 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:24.772 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.772 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.772 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.772 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:24.772 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:24.772 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:25.030 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:25.030 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.030 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:25.030 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:25.030 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:25.030 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.030 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:25.030 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.030 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.030 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.030 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:25.030 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:25.030 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:25.289 00:17:25.289 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.289 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.289 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.547 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.547 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.547 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.547 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.547 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.547 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.547 { 00:17:25.547 "cntlid": 55, 00:17:25.547 "qid": 0, 00:17:25.547 "state": "enabled", 00:17:25.547 "thread": "nvmf_tgt_poll_group_000", 00:17:25.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:25.547 "listen_address": { 00:17:25.547 "trtype": "TCP", 00:17:25.547 "adrfam": "IPv4", 00:17:25.547 "traddr": "10.0.0.2", 00:17:25.547 "trsvcid": "4420" 00:17:25.547 }, 00:17:25.547 "peer_address": { 00:17:25.547 "trtype": "TCP", 00:17:25.547 "adrfam": "IPv4", 00:17:25.547 "traddr": "10.0.0.1", 00:17:25.547 "trsvcid": "50444" 00:17:25.547 }, 00:17:25.547 "auth": { 00:17:25.547 "state": "completed", 00:17:25.547 "digest": "sha384", 00:17:25.547 "dhgroup": "null" 00:17:25.547 } 00:17:25.547 } 00:17:25.547 ]' 00:17:25.547 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.548 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.548 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.548 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:25.548 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.548 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.548 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.548 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.806 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:17:25.806 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:17:26.373 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.373 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:26.373 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.373 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.373 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.373 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.373 14:19:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.373 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:26.373 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:26.632 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:26.632 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.632 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.632 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:26.632 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:26.632 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.632 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.632 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.632 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.632 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.632 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.632 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.632 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.890 00:17:26.890 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.890 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.890 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.149 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.149 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.149 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:27.149 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.149 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.149 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.149 { 00:17:27.149 "cntlid": 57, 00:17:27.149 "qid": 0, 00:17:27.149 "state": "enabled", 00:17:27.149 "thread": "nvmf_tgt_poll_group_000", 00:17:27.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:27.149 "listen_address": { 00:17:27.149 "trtype": "TCP", 00:17:27.149 "adrfam": "IPv4", 00:17:27.149 "traddr": "10.0.0.2", 00:17:27.149 "trsvcid": "4420" 00:17:27.149 }, 00:17:27.149 "peer_address": { 00:17:27.149 "trtype": "TCP", 00:17:27.149 "adrfam": "IPv4", 00:17:27.149 "traddr": "10.0.0.1", 00:17:27.149 "trsvcid": "50482" 00:17:27.149 }, 00:17:27.149 "auth": { 00:17:27.149 "state": "completed", 00:17:27.149 "digest": "sha384", 00:17:27.149 "dhgroup": "ffdhe2048" 00:17:27.149 } 00:17:27.149 } 00:17:27.149 ]' 00:17:27.149 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.149 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.149 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.149 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:27.149 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.149 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.149 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.149 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.408 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:17:27.408 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:17:27.976 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.976 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:27.976 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.976 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.976 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.976 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.976 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:27.976 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:28.235 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:28.235 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.235 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.235 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:28.235 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:28.235 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.235 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.235 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.235 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.235 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.235 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.235 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.235 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.493 00:17:28.493 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.493 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.493 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.493 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.493 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.493 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.493 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.493 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.493 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.493 { 00:17:28.493 "cntlid": 59, 00:17:28.493 "qid": 0, 00:17:28.493 "state": "enabled", 00:17:28.493 "thread": "nvmf_tgt_poll_group_000", 00:17:28.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:28.493 "listen_address": { 00:17:28.493 "trtype": "TCP", 00:17:28.493 "adrfam": "IPv4", 00:17:28.493 "traddr": "10.0.0.2", 00:17:28.493 "trsvcid": "4420" 00:17:28.493 }, 00:17:28.493 "peer_address": { 00:17:28.493 "trtype": "TCP", 00:17:28.493 "adrfam": "IPv4", 00:17:28.493 "traddr": "10.0.0.1", 00:17:28.493 "trsvcid": "50516" 00:17:28.493 }, 00:17:28.493 "auth": { 00:17:28.493 "state": "completed", 00:17:28.493 "digest": "sha384", 00:17:28.493 "dhgroup": "ffdhe2048" 00:17:28.493 } 00:17:28.493 } 00:17:28.493 ]' 00:17:28.493 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.752 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.752 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.752 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:28.752 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.752 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.752 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.752 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.011 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:17:29.011 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:17:29.578 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.578 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:29.578 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.578 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.578 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.578 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.578 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:29.578 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:29.578 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:29.578 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.578 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:29.578 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:29.578 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:29.578 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.578 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.578 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.578 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.837 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.837 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.837 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.837 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.837 00:17:30.096 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.096 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.096 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.096 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.096 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.096 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.096 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.096 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.096 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.096 { 00:17:30.096 "cntlid": 61, 00:17:30.096 "qid": 0, 00:17:30.096 "state": "enabled", 00:17:30.096 "thread": "nvmf_tgt_poll_group_000", 00:17:30.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:30.096 "listen_address": { 00:17:30.096 "trtype": "TCP", 00:17:30.096 "adrfam": "IPv4", 00:17:30.096 "traddr": "10.0.0.2", 00:17:30.096 "trsvcid": "4420" 00:17:30.096 }, 00:17:30.096 "peer_address": { 00:17:30.096 "trtype": "TCP", 00:17:30.096 "adrfam": "IPv4", 00:17:30.096 "traddr": "10.0.0.1", 00:17:30.096 "trsvcid": "50548" 00:17:30.096 }, 00:17:30.096 "auth": { 00:17:30.096 "state": "completed", 00:17:30.096 "digest": "sha384", 00:17:30.096 "dhgroup": "ffdhe2048" 00:17:30.096 } 00:17:30.096 } 00:17:30.096 ]' 00:17:30.097 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.355 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.355 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.355 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:30.355 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.355 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.355 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.355 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.614 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:17:30.614 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:17:31.181 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.181 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:31.181 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.181 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.181 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.181 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.181 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:31.181 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:31.181 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:31.181 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.181 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.181 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:31.181 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:31.181 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.181 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:31.181 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.181 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.440 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.440 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:31.440 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.440 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.440 00:17:31.440 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.698 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.698 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.698 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.698 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.698 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.698 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.698 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.698 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.698 { 00:17:31.698 "cntlid": 63, 00:17:31.698 "qid": 0, 00:17:31.698 "state": "enabled", 00:17:31.698 "thread": "nvmf_tgt_poll_group_000", 00:17:31.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:31.698 "listen_address": { 00:17:31.698 "trtype": "TCP", 00:17:31.698 "adrfam": "IPv4", 00:17:31.698 "traddr": "10.0.0.2", 00:17:31.698 "trsvcid": "4420" 00:17:31.698 }, 00:17:31.698 "peer_address": { 00:17:31.698 "trtype": "TCP", 00:17:31.698 "adrfam": "IPv4", 00:17:31.698 "traddr": "10.0.0.1", 00:17:31.698 "trsvcid": "50574" 00:17:31.698 }, 00:17:31.698 "auth": { 00:17:31.698 "state": "completed", 00:17:31.698 "digest": "sha384", 00:17:31.698 "dhgroup": "ffdhe2048" 00:17:31.698 } 00:17:31.698 } 00:17:31.698 ]' 00:17:31.698 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.699 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.699 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.957 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:31.957 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.957 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.957 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.957 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.215 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:17:32.215 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:17:32.782 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:32.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.782 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:32.782 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.782 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.782 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.782 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.782 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.782 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:32.782 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:32.782 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:32.782 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.782 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.782 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:32.782 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:32.782 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.782 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.782 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.782 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.782 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.782 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.782 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.782 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.041 
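[annotation] The JSON qpair dump printed after each attach is what the script actually asserts on (the auth.sh@75..77 entries above, with their glob-escaped comparisons). A minimal sketch of those three checks, assuming the same rpc.py handle and subsystem NQN as in the previous annotation:

  # Sketch: assertions against the qpair auth object for the ffdhe3072 pass.
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

  [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha384    ]]  # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe3072 ]]  # negotiated DH group
  [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]  # handshake finished
[/annotation]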
00:17:33.041 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.041 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.041 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.299 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.299 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.299 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.299 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.299 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.299 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.299 { 00:17:33.299 "cntlid": 65, 00:17:33.299 "qid": 0, 00:17:33.299 "state": "enabled", 00:17:33.299 "thread": "nvmf_tgt_poll_group_000", 00:17:33.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:33.299 "listen_address": { 00:17:33.299 "trtype": "TCP", 00:17:33.299 "adrfam": "IPv4", 00:17:33.299 "traddr": "10.0.0.2", 00:17:33.299 "trsvcid": "4420" 00:17:33.299 }, 00:17:33.299 "peer_address": { 00:17:33.299 "trtype": "TCP", 00:17:33.299 "adrfam": "IPv4", 00:17:33.299 "traddr": "10.0.0.1", 00:17:33.299 "trsvcid": "50600" 00:17:33.299 }, 00:17:33.299 "auth": { 00:17:33.299 "state": "completed", 00:17:33.299 "digest": "sha384", 00:17:33.299 "dhgroup": "ffdhe3072" 00:17:33.299 } 00:17:33.299 } 00:17:33.299 ]' 00:17:33.299 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.299 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.299 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.558 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:33.558 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.558 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.558 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.558 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.816 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:17:33.816 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:17:34.383 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.383 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:34.383 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.383 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.383 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.383 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.383 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:34.383 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:34.383 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:34.383 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.383 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:34.383 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:34.383 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:34.383 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.383 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.383 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.383 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.383 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.383 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.383 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.383 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.641 00:17:34.641 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.641 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.641 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.899 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.899 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.899 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.899 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.900 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.900 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.900 { 00:17:34.900 "cntlid": 67, 00:17:34.900 "qid": 0, 00:17:34.900 "state": "enabled", 00:17:34.900 "thread": "nvmf_tgt_poll_group_000", 00:17:34.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:34.900 "listen_address": { 00:17:34.900 "trtype": "TCP", 00:17:34.900 "adrfam": "IPv4", 00:17:34.900 "traddr": "10.0.0.2", 00:17:34.900 "trsvcid": "4420" 00:17:34.900 }, 00:17:34.900 "peer_address": { 00:17:34.900 "trtype": "TCP", 00:17:34.900 "adrfam": "IPv4", 00:17:34.900 "traddr": "10.0.0.1", 00:17:34.900 "trsvcid": "32938" 00:17:34.900 }, 00:17:34.900 "auth": { 00:17:34.900 "state": "completed", 00:17:34.900 "digest": "sha384", 00:17:34.900 "dhgroup": "ffdhe3072" 00:17:34.900 } 00:17:34.900 } 00:17:34.900 ]' 00:17:34.900 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.900 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.900 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.158 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:35.158 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.158 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.158 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.158 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.417 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret 
DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:17:35.417 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:17:35.984 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.984 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:35.984 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.984 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.984 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.984 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.984 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:35.984 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:35.984 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:35.984 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.984 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.984 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:35.984 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:35.984 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.984 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.984 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.984 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.984 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.984 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.984 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.984 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.243 00:17:36.243 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.243 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.243 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.501 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.501 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.501 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.501 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.501 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.501 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.501 { 00:17:36.501 "cntlid": 69, 00:17:36.501 "qid": 0, 00:17:36.501 "state": "enabled", 00:17:36.501 "thread": "nvmf_tgt_poll_group_000", 00:17:36.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:36.501 "listen_address": { 00:17:36.501 "trtype": "TCP", 00:17:36.501 "adrfam": "IPv4", 00:17:36.501 "traddr": "10.0.0.2", 00:17:36.501 "trsvcid": "4420" 00:17:36.501 }, 00:17:36.501 "peer_address": { 00:17:36.501 "trtype": "TCP", 00:17:36.501 "adrfam": "IPv4", 00:17:36.501 "traddr": "10.0.0.1", 00:17:36.501 "trsvcid": "32970" 00:17:36.501 }, 00:17:36.501 "auth": { 00:17:36.501 "state": "completed", 00:17:36.501 "digest": "sha384", 00:17:36.501 "dhgroup": "ffdhe3072" 00:17:36.501 } 00:17:36.501 } 00:17:36.501 ]' 00:17:36.501 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.501 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.501 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.501 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:36.501 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.758 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.758 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.758 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:36.758 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:17:36.758 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:17:37.323 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.582 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:37.582 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.582 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.582 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.582 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.582 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:37.582 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:37.582 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:37.582 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.582 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:37.582 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:37.582 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:37.582 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.582 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:37.582 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.582 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.582 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.582 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
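
For readers following the raw xtrace above, each pass of the test loop boils down to three RPCs. Below is a minimal sketch of the sha384/ffdhe3072/key3 pass that has just started, under stated assumptions not visible in this excerpt: $SPDK stands in for the SPDK checkout path, $TGTSOCK for the target RPC socket behind rpc_cmd, $HOSTSOCK for the host socket behind hostrpc (/var/tmp/host.sock in the log), and the DH-HMAC-CHAP keys key0..key3 are taken to have been registered earlier in the run.

#!/usr/bin/env bash
# Sketch of one connect_authenticate pass, paraphrased from the log above.
# $SPDK, $TGTSOCK, $HOSTSOCK are assumed stand-ins, not names from the log.
set -euo pipefail
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562

# Pin the host to a single digest/dhgroup combination for this pass.
"$SPDK/scripts/rpc.py" -s "$HOSTSOCK" bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Authorize the host on the subsystem. Mirroring the log, key3 carries no
# controller key (ckeys[3] is empty), so only --dhchap-key is passed here.
"$SPDK/scripts/rpc.py" -s "$TGTSOCK" nvmf_subsystem_add_host \
        "$SUBNQN" "$HOSTNQN" --dhchap-key key3

# Attach a controller over TCP; DH-HMAC-CHAP runs during this connect.
"$SPDK/scripts/rpc.py" -s "$HOSTSOCK" bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key3

Note that key3 is the one slot in this run whose add_host call lacks --dhchap-ctrlr-key, while the key0-key2 passes above pair each key with a ckey; that is why this pass exercises unidirectional rather than bidirectional authentication.
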
00:17:37.582 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.582 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.840 00:17:37.840 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.840 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.840 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.098 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.098 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.098 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.098 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.098 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.098 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.098 { 00:17:38.098 "cntlid": 71, 00:17:38.098 "qid": 0, 00:17:38.098 "state": "enabled", 00:17:38.098 "thread": "nvmf_tgt_poll_group_000", 00:17:38.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:38.098 "listen_address": { 00:17:38.098 "trtype": "TCP", 00:17:38.098 "adrfam": "IPv4", 00:17:38.098 "traddr": "10.0.0.2", 00:17:38.098 "trsvcid": "4420" 00:17:38.098 }, 00:17:38.098 "peer_address": { 00:17:38.098 "trtype": "TCP", 00:17:38.098 "adrfam": "IPv4", 00:17:38.098 "traddr": "10.0.0.1", 00:17:38.098 "trsvcid": "33000" 00:17:38.098 }, 00:17:38.098 "auth": { 00:17:38.098 "state": "completed", 00:17:38.098 "digest": "sha384", 00:17:38.098 "dhgroup": "ffdhe3072" 00:17:38.098 } 00:17:38.098 } 00:17:38.098 ]' 00:17:38.098 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.099 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.099 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.356 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:38.356 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.356 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.356 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.356 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.356 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:17:38.356 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:17:38.922 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.922 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:38.922 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.922 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.922 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.922 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.922 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.922 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:38.922 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:39.181 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:39.181 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.181 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:39.181 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:39.181 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:39.181 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.181 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.181 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.181 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.181 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
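
Before the ffdhe4096/key0 pass continues below, it is worth condensing the verification and teardown block that closes every pass above into one place. This is a sketch, not the test's literal code: digest and dhgroup are parameters here since they change per pass, and $SPDK/$TGTSOCK/$HOSTSOCK are the same assumed stand-ins as in the earlier sketch.

#!/usr/bin/env bash
# Sketch of the per-pass verification seen repeatedly in the log above.
set -euo pipefail
digest=sha384 dhgroup=ffdhe4096
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Host side: the freshly attached controller must be visible.
[[ $("$SPDK/scripts/rpc.py" -s "$HOSTSOCK" bdev_nvme_get_controllers \
        | jq -r '.[].name') == nvme0 ]]

# Target side: the qpair's auth block must record the negotiated
# parameters and a completed authentication state.
qpairs=$("$SPDK/scripts/rpc.py" -s "$TGTSOCK" nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed  ]]

# Detach so the next (digest, dhgroup, key) combination starts clean.
"$SPDK/scripts/rpc.py" -s "$HOSTSOCK" bdev_nvme_detach_controller nvme0

The state == completed check is the actual pass/fail signal; the digest and dhgroup checks confirm the target negotiated exactly what bdev_nvme_set_options pinned on the host side.
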
00:17:39.181 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.181 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.181 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.439 00:17:39.439 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.439 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.439 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.697 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.697 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.697 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.697 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.697 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.697 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.697 { 00:17:39.697 "cntlid": 73, 00:17:39.697 "qid": 0, 00:17:39.697 "state": "enabled", 00:17:39.697 "thread": "nvmf_tgt_poll_group_000", 00:17:39.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:39.697 "listen_address": { 00:17:39.697 "trtype": "TCP", 00:17:39.697 "adrfam": "IPv4", 00:17:39.697 "traddr": "10.0.0.2", 00:17:39.697 "trsvcid": "4420" 00:17:39.697 }, 00:17:39.697 "peer_address": { 00:17:39.697 "trtype": "TCP", 00:17:39.697 "adrfam": "IPv4", 00:17:39.697 "traddr": "10.0.0.1", 00:17:39.697 "trsvcid": "33034" 00:17:39.697 }, 00:17:39.697 "auth": { 00:17:39.697 "state": "completed", 00:17:39.697 "digest": "sha384", 00:17:39.697 "dhgroup": "ffdhe4096" 00:17:39.697 } 00:17:39.697 } 00:17:39.697 ]' 00:17:39.697 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.697 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.697 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.955 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:39.955 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.955 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.955 
14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.955 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.212 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:17:40.212 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:17:40.779 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.779 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:40.779 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.779 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.779 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.779 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.779 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.779 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.779 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:40.779 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.779 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.779 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:40.779 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:40.779 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.779 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.779 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.779 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.779 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.779 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.779 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.779 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.037 00:17:41.295 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.295 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.295 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.295 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.295 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.295 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.295 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.295 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.295 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.295 { 00:17:41.295 "cntlid": 75, 00:17:41.295 "qid": 0, 00:17:41.295 "state": "enabled", 00:17:41.295 "thread": "nvmf_tgt_poll_group_000", 00:17:41.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:41.295 "listen_address": { 00:17:41.295 "trtype": "TCP", 00:17:41.296 "adrfam": "IPv4", 00:17:41.296 "traddr": "10.0.0.2", 00:17:41.296 "trsvcid": "4420" 00:17:41.296 }, 00:17:41.296 "peer_address": { 00:17:41.296 "trtype": "TCP", 00:17:41.296 "adrfam": "IPv4", 00:17:41.296 "traddr": "10.0.0.1", 00:17:41.296 "trsvcid": "33054" 00:17:41.296 }, 00:17:41.296 "auth": { 00:17:41.296 "state": "completed", 00:17:41.296 "digest": "sha384", 00:17:41.296 "dhgroup": "ffdhe4096" 00:17:41.296 } 00:17:41.296 } 00:17:41.296 ]' 00:17:41.296 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.554 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.554 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.554 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:41.554 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.554 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.554 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.554 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.812 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:17:41.812 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:17:42.378 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.378 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:42.378 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.378 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.378 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.378 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.378 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:42.378 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:42.637 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:42.637 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.637 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:42.637 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:42.637 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:42.637 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.637 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.637 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.637 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.637 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.637 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.637 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.637 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.895 00:17:42.895 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.895 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.895 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.895 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.895 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.895 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.895 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.895 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.895 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.895 { 00:17:42.895 "cntlid": 77, 00:17:42.895 "qid": 0, 00:17:42.895 "state": "enabled", 00:17:42.895 "thread": "nvmf_tgt_poll_group_000", 00:17:42.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:42.895 "listen_address": { 00:17:42.895 "trtype": "TCP", 00:17:42.895 "adrfam": "IPv4", 00:17:42.895 "traddr": "10.0.0.2", 00:17:42.895 "trsvcid": "4420" 00:17:42.895 }, 00:17:42.895 "peer_address": { 00:17:42.895 "trtype": "TCP", 00:17:42.895 "adrfam": "IPv4", 00:17:42.895 "traddr": "10.0.0.1", 00:17:42.895 "trsvcid": "33076" 00:17:42.895 }, 00:17:42.895 "auth": { 00:17:42.895 "state": "completed", 00:17:42.895 "digest": "sha384", 00:17:42.895 "dhgroup": "ffdhe4096" 00:17:42.895 } 00:17:42.895 } 00:17:42.895 ]' 00:17:42.895 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.154 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.154 14:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.154 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:43.154 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.154 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.154 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.154 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.412 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:17:43.412 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:17:43.979 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.979 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:43.979 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.979 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.979 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.979 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.979 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:43.979 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:43.979 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:43.979 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.979 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:43.979 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:43.979 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:43.979 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.979 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:43.979 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.979 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.237 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.237 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:44.237 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.237 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.495 00:17:44.495 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.495 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.495 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.495 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.495 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.495 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.495 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.495 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.495 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.495 { 00:17:44.495 "cntlid": 79, 00:17:44.495 "qid": 0, 00:17:44.495 "state": "enabled", 00:17:44.495 "thread": "nvmf_tgt_poll_group_000", 00:17:44.495 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:44.495 "listen_address": { 00:17:44.495 "trtype": "TCP", 00:17:44.495 "adrfam": "IPv4", 00:17:44.495 "traddr": "10.0.0.2", 00:17:44.495 "trsvcid": "4420" 00:17:44.495 }, 00:17:44.495 "peer_address": { 00:17:44.495 "trtype": "TCP", 00:17:44.495 "adrfam": "IPv4", 00:17:44.495 "traddr": "10.0.0.1", 00:17:44.495 "trsvcid": "35780" 00:17:44.495 }, 00:17:44.495 "auth": { 00:17:44.495 "state": "completed", 00:17:44.495 "digest": "sha384", 00:17:44.495 "dhgroup": "ffdhe4096" 00:17:44.495 } 00:17:44.495 } 00:17:44.495 ]' 00:17:44.495 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.752 14:19:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.752 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.752 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:44.752 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.752 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.752 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.752 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.010 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:17:45.010 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:17:45.576 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.576 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:45.576 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.576 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.576 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.576 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.576 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.576 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:45.576 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:45.576 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:45.577 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.577 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:45.577 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:45.835 14:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:45.835 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.835 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.835 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.835 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.835 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.835 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.835 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.835 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.093 00:17:46.093 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.093 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.093 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.351 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.351 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.351 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.351 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.351 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.351 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.351 { 00:17:46.351 "cntlid": 81, 00:17:46.351 "qid": 0, 00:17:46.351 "state": "enabled", 00:17:46.351 "thread": "nvmf_tgt_poll_group_000", 00:17:46.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:46.351 "listen_address": { 00:17:46.351 "trtype": "TCP", 00:17:46.351 "adrfam": "IPv4", 00:17:46.351 "traddr": "10.0.0.2", 00:17:46.351 "trsvcid": "4420" 00:17:46.351 }, 00:17:46.351 "peer_address": { 00:17:46.351 "trtype": "TCP", 00:17:46.351 "adrfam": "IPv4", 00:17:46.351 "traddr": "10.0.0.1", 00:17:46.351 "trsvcid": "35812" 00:17:46.351 }, 00:17:46.351 "auth": { 00:17:46.351 "state": "completed", 00:17:46.351 "digest": 
"sha384", 00:17:46.351 "dhgroup": "ffdhe6144" 00:17:46.351 } 00:17:46.351 } 00:17:46.351 ]' 00:17:46.351 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.351 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.351 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.351 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:46.351 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.351 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.351 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.351 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.610 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:17:46.610 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:17:47.214 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.214 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:47.214 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.214 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.214 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.214 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.214 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:47.214 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:47.527 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:47.527 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.527 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:47.527 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:47.527 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:47.527 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.527 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.527 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.527 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.527 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.527 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.527 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.527 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.785 00:17:47.785 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.785 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.785 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.044 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.044 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.044 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.044 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.044 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.044 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.044 { 00:17:48.044 "cntlid": 83, 00:17:48.044 "qid": 0, 00:17:48.044 "state": "enabled", 00:17:48.044 "thread": "nvmf_tgt_poll_group_000", 00:17:48.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:48.044 "listen_address": { 00:17:48.044 "trtype": "TCP", 00:17:48.044 "adrfam": "IPv4", 00:17:48.044 "traddr": "10.0.0.2", 00:17:48.044 
"trsvcid": "4420" 00:17:48.044 }, 00:17:48.044 "peer_address": { 00:17:48.044 "trtype": "TCP", 00:17:48.044 "adrfam": "IPv4", 00:17:48.044 "traddr": "10.0.0.1", 00:17:48.044 "trsvcid": "35844" 00:17:48.044 }, 00:17:48.044 "auth": { 00:17:48.044 "state": "completed", 00:17:48.044 "digest": "sha384", 00:17:48.044 "dhgroup": "ffdhe6144" 00:17:48.044 } 00:17:48.044 } 00:17:48.044 ]' 00:17:48.044 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.044 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.044 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.044 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:48.044 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.044 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.044 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.044 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.303 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:17:48.303 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:17:48.868 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.868 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:48.868 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.868 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.868 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.868 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.868 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:48.868 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:49.126 
14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:49.126 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.126 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:49.126 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:49.126 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:49.126 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.126 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.126 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.126 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.126 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.126 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.126 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.126 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.384 00:17:49.385 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.385 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.385 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.643 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.643 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.643 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.643 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.643 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.643 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.643 { 00:17:49.643 "cntlid": 85, 00:17:49.643 "qid": 0, 00:17:49.643 "state": "enabled", 00:17:49.643 "thread": "nvmf_tgt_poll_group_000", 00:17:49.643 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:49.643 "listen_address": { 00:17:49.643 "trtype": "TCP", 00:17:49.643 "adrfam": "IPv4", 00:17:49.643 "traddr": "10.0.0.2", 00:17:49.643 "trsvcid": "4420" 00:17:49.643 }, 00:17:49.643 "peer_address": { 00:17:49.643 "trtype": "TCP", 00:17:49.643 "adrfam": "IPv4", 00:17:49.643 "traddr": "10.0.0.1", 00:17:49.643 "trsvcid": "35868" 00:17:49.643 }, 00:17:49.643 "auth": { 00:17:49.643 "state": "completed", 00:17:49.643 "digest": "sha384", 00:17:49.643 "dhgroup": "ffdhe6144" 00:17:49.643 } 00:17:49.643 } 00:17:49.643 ]' 00:17:49.643 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.643 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.643 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.643 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:49.901 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.901 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.901 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.901 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.901 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:17:49.901 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:17:50.467 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.726 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:50.726 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.726 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.726 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.726 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.726 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:50.726 14:19:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:50.726 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:50.726 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.726 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:50.726 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:50.726 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:50.726 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.726 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:50.726 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.726 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.726 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.726 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:50.726 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.726 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.292 00:17:51.292 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.292 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.292 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.292 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.292 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.292 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.292 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.292 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.292 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.292 { 00:17:51.292 "cntlid": 87, 
00:17:51.292 "qid": 0, 00:17:51.292 "state": "enabled", 00:17:51.292 "thread": "nvmf_tgt_poll_group_000", 00:17:51.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:51.292 "listen_address": { 00:17:51.292 "trtype": "TCP", 00:17:51.292 "adrfam": "IPv4", 00:17:51.292 "traddr": "10.0.0.2", 00:17:51.292 "trsvcid": "4420" 00:17:51.292 }, 00:17:51.292 "peer_address": { 00:17:51.292 "trtype": "TCP", 00:17:51.292 "adrfam": "IPv4", 00:17:51.292 "traddr": "10.0.0.1", 00:17:51.292 "trsvcid": "35876" 00:17:51.292 }, 00:17:51.292 "auth": { 00:17:51.292 "state": "completed", 00:17:51.292 "digest": "sha384", 00:17:51.292 "dhgroup": "ffdhe6144" 00:17:51.292 } 00:17:51.292 } 00:17:51.292 ]' 00:17:51.292 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.292 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:51.551 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.551 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:51.551 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.551 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.551 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.551 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.809 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:17:51.809 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:17:52.376 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.376 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:52.376 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.376 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.376 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.376 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.376 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.376 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:52.376 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:52.376 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:52.376 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.376 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:52.376 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:52.376 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:52.376 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.376 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.376 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.376 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.376 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.376 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.376 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.376 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.943 00:17:52.943 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.943 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.943 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.202 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.202 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.202 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.202 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.202 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.202 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.202 { 00:17:53.202 "cntlid": 89, 00:17:53.202 "qid": 0, 00:17:53.202 "state": "enabled", 00:17:53.202 "thread": "nvmf_tgt_poll_group_000", 00:17:53.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:53.202 "listen_address": { 00:17:53.202 "trtype": "TCP", 00:17:53.202 "adrfam": "IPv4", 00:17:53.202 "traddr": "10.0.0.2", 00:17:53.202 "trsvcid": "4420" 00:17:53.202 }, 00:17:53.202 "peer_address": { 00:17:53.202 "trtype": "TCP", 00:17:53.202 "adrfam": "IPv4", 00:17:53.202 "traddr": "10.0.0.1", 00:17:53.202 "trsvcid": "35896" 00:17:53.202 }, 00:17:53.202 "auth": { 00:17:53.202 "state": "completed", 00:17:53.202 "digest": "sha384", 00:17:53.202 "dhgroup": "ffdhe8192" 00:17:53.202 } 00:17:53.202 } 00:17:53.202 ]' 00:17:53.202 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.202 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:53.202 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.202 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:53.202 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.202 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.202 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.202 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.461 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:17:53.461 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:17:54.028 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.028 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:54.028 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.028 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.028 14:19:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.028 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.028 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:54.028 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:54.287 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:54.287 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.287 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:54.287 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:54.287 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:54.287 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.287 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.287 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.287 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.287 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.287 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.287 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.287 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.859 00:17:54.859 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.859 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.859 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.859 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.859 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:54.859 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.859 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.859 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.859 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.859 { 00:17:54.859 "cntlid": 91, 00:17:54.859 "qid": 0, 00:17:54.859 "state": "enabled", 00:17:54.859 "thread": "nvmf_tgt_poll_group_000", 00:17:54.859 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:54.859 "listen_address": { 00:17:54.859 "trtype": "TCP", 00:17:54.859 "adrfam": "IPv4", 00:17:54.859 "traddr": "10.0.0.2", 00:17:54.859 "trsvcid": "4420" 00:17:54.859 }, 00:17:54.859 "peer_address": { 00:17:54.859 "trtype": "TCP", 00:17:54.859 "adrfam": "IPv4", 00:17:54.859 "traddr": "10.0.0.1", 00:17:54.859 "trsvcid": "41514" 00:17:54.859 }, 00:17:54.860 "auth": { 00:17:54.860 "state": "completed", 00:17:54.860 "digest": "sha384", 00:17:54.860 "dhgroup": "ffdhe8192" 00:17:54.860 } 00:17:54.860 } 00:17:54.860 ]' 00:17:54.860 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.118 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:55.118 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.118 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:55.118 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.118 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.118 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.118 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.376 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:17:55.376 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:17:55.944 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.944 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:55.944 14:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.944 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.944 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.944 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.944 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:55.944 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:55.944 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:55.944 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.944 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:55.944 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:55.944 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:55.944 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.944 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.944 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.944 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.202 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.202 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.202 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.202 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.461 00:17:56.461 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.461 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.461 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.719 14:19:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.719 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.719 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.719 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.719 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.719 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.719 { 00:17:56.719 "cntlid": 93, 00:17:56.719 "qid": 0, 00:17:56.719 "state": "enabled", 00:17:56.719 "thread": "nvmf_tgt_poll_group_000", 00:17:56.719 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:56.719 "listen_address": { 00:17:56.719 "trtype": "TCP", 00:17:56.719 "adrfam": "IPv4", 00:17:56.720 "traddr": "10.0.0.2", 00:17:56.720 "trsvcid": "4420" 00:17:56.720 }, 00:17:56.720 "peer_address": { 00:17:56.720 "trtype": "TCP", 00:17:56.720 "adrfam": "IPv4", 00:17:56.720 "traddr": "10.0.0.1", 00:17:56.720 "trsvcid": "41538" 00:17:56.720 }, 00:17:56.720 "auth": { 00:17:56.720 "state": "completed", 00:17:56.720 "digest": "sha384", 00:17:56.720 "dhgroup": "ffdhe8192" 00:17:56.720 } 00:17:56.720 } 00:17:56.720 ]' 00:17:56.720 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.720 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.720 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.720 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:56.720 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.978 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.978 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.978 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.978 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:17:56.978 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:17:57.546 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.546 14:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:57.546 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.546 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.805 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.806 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.806 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:57.806 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:57.806 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:57.806 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.806 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:57.806 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:57.806 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:57.806 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.806 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:57.806 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.806 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.806 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.806 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:57.806 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.806 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:58.373 00:17:58.373 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.373 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.373 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.631 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.631 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.631 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.631 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.631 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.631 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.631 { 00:17:58.631 "cntlid": 95, 00:17:58.631 "qid": 0, 00:17:58.631 "state": "enabled", 00:17:58.631 "thread": "nvmf_tgt_poll_group_000", 00:17:58.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:58.631 "listen_address": { 00:17:58.631 "trtype": "TCP", 00:17:58.631 "adrfam": "IPv4", 00:17:58.631 "traddr": "10.0.0.2", 00:17:58.631 "trsvcid": "4420" 00:17:58.631 }, 00:17:58.631 "peer_address": { 00:17:58.631 "trtype": "TCP", 00:17:58.632 "adrfam": "IPv4", 00:17:58.632 "traddr": "10.0.0.1", 00:17:58.632 "trsvcid": "41558" 00:17:58.632 }, 00:17:58.632 "auth": { 00:17:58.632 "state": "completed", 00:17:58.632 "digest": "sha384", 00:17:58.632 "dhgroup": "ffdhe8192" 00:17:58.632 } 00:17:58.632 } 00:17:58.632 ]' 00:17:58.632 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.632 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:58.632 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.632 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:58.632 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.632 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.632 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.632 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.890 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:17:58.890 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:17:59.458 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.458 14:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:59.458 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.458 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.458 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.458 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:59.458 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.458 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.458 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:59.458 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:59.716 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:59.716 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.716 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.716 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:59.716 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:59.716 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.716 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.716 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.716 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.716 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.717 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.717 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.717 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.975 00:17:59.975 
14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.975 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.975 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.234 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.234 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.234 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.234 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.234 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.234 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.234 { 00:18:00.234 "cntlid": 97, 00:18:00.234 "qid": 0, 00:18:00.234 "state": "enabled", 00:18:00.234 "thread": "nvmf_tgt_poll_group_000", 00:18:00.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:00.234 "listen_address": { 00:18:00.234 "trtype": "TCP", 00:18:00.234 "adrfam": "IPv4", 00:18:00.234 "traddr": "10.0.0.2", 00:18:00.234 "trsvcid": "4420" 00:18:00.234 }, 00:18:00.234 "peer_address": { 00:18:00.234 "trtype": "TCP", 00:18:00.234 "adrfam": "IPv4", 00:18:00.234 "traddr": "10.0.0.1", 00:18:00.234 "trsvcid": "41578" 00:18:00.234 }, 00:18:00.234 "auth": { 00:18:00.234 "state": "completed", 00:18:00.234 "digest": "sha512", 00:18:00.234 "dhgroup": "null" 00:18:00.234 } 00:18:00.234 } 00:18:00.234 ]' 00:18:00.234 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.234 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.234 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.234 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:00.234 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.234 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.234 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.234 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.493 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:18:00.493 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 
801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:18:01.059 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.059 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:01.059 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.059 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.059 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.059 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.059 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:01.059 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:01.316 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:01.316 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.316 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:01.316 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:01.316 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:01.316 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.316 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.316 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.316 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.316 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.316 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.316 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.316 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.575 00:18:01.575 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.575 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.575 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.833 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.833 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.833 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.833 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.833 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.833 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.833 { 00:18:01.833 "cntlid": 99, 00:18:01.833 "qid": 0, 00:18:01.833 "state": "enabled", 00:18:01.833 "thread": "nvmf_tgt_poll_group_000", 00:18:01.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:01.833 "listen_address": { 00:18:01.833 "trtype": "TCP", 00:18:01.833 "adrfam": "IPv4", 00:18:01.833 "traddr": "10.0.0.2", 00:18:01.833 "trsvcid": "4420" 00:18:01.833 }, 00:18:01.833 "peer_address": { 00:18:01.833 "trtype": "TCP", 00:18:01.833 "adrfam": "IPv4", 00:18:01.833 "traddr": "10.0.0.1", 00:18:01.833 "trsvcid": "41606" 00:18:01.833 }, 00:18:01.833 "auth": { 00:18:01.833 "state": "completed", 00:18:01.833 "digest": "sha512", 00:18:01.833 "dhgroup": "null" 00:18:01.833 } 00:18:01.833 } 00:18:01.833 ]' 00:18:01.833 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.833 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.833 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.833 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:01.833 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.833 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.833 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.833 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.092 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:18:02.092 14:20:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:18:02.660 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.660 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:02.660 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.660 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.660 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.660 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.660 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:02.660 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:02.919 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:02.919 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.919 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.919 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:02.919 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:02.919 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.919 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.919 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.919 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.919 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.919 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.920 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
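Every connect_authenticate round traced above follows the same RPC choreography, varying only the digest, the DH group, and the key index. A minimal sketch of one round, assuming the DH-HMAC-CHAP keys key0-key3 (and controller keys ckey0-ckey3) were registered with the host RPC server earlier in the run, and that the target answers on its default RPC socket; the host-side socket /var/tmp/host.sock is taken verbatim from the trace, while rpc_py and hostnqn are hypothetical shorthands:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562

# Host side: restrict the initiator to exactly one digest/dhgroup combination.
$rpc_py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null

# Target side: allow the host NQN, with a controller key for bidirectional auth.
$rpc_py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach a controller; DH-HMAC-CHAP runs as part of the fabric CONNECT.
$rpc_py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Target side: the qpair must report the negotiated parameters.
$rpc_py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"

# Tear down before the next digest/dhgroup/key combination.
$rpc_py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The jq probes on .[0].auth.digest and .[0].auth.dhgroup in the trace are each round's real assertions: the negotiated pair must equal what bdev_nvme_set_options advertised, otherwise the [[ ... ]] comparison fails the test.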
00:18:02.920 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.178 00:18:03.178 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.178 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.178 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.178 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.178 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.178 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.178 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.178 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.178 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.178 { 00:18:03.178 "cntlid": 101, 00:18:03.178 "qid": 0, 00:18:03.178 "state": "enabled", 00:18:03.178 "thread": "nvmf_tgt_poll_group_000", 00:18:03.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:03.178 "listen_address": { 00:18:03.178 "trtype": "TCP", 00:18:03.178 "adrfam": "IPv4", 00:18:03.178 "traddr": "10.0.0.2", 00:18:03.178 "trsvcid": "4420" 00:18:03.178 }, 00:18:03.178 "peer_address": { 00:18:03.178 "trtype": "TCP", 00:18:03.178 "adrfam": "IPv4", 00:18:03.178 "traddr": "10.0.0.1", 00:18:03.178 "trsvcid": "41648" 00:18:03.178 }, 00:18:03.178 "auth": { 00:18:03.178 "state": "completed", 00:18:03.178 "digest": "sha512", 00:18:03.178 "dhgroup": "null" 00:18:03.178 } 00:18:03.179 } 00:18:03.179 ]' 00:18:03.179 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.437 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.437 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.437 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:03.437 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.437 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.437 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.437 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.696 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:18:03.696 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:18:04.264 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.264 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:04.264 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.264 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.264 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.264 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.264 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:04.264 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:04.264 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:04.264 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.264 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.264 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:04.264 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:04.264 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.264 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:04.264 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.264 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.264 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.264 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:04.264 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.264 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.523 00:18:04.523 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.523 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.523 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.782 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.782 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.782 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.782 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.782 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.782 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.782 { 00:18:04.782 "cntlid": 103, 00:18:04.782 "qid": 0, 00:18:04.782 "state": "enabled", 00:18:04.782 "thread": "nvmf_tgt_poll_group_000", 00:18:04.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:04.782 "listen_address": { 00:18:04.782 "trtype": "TCP", 00:18:04.782 "adrfam": "IPv4", 00:18:04.782 "traddr": "10.0.0.2", 00:18:04.782 "trsvcid": "4420" 00:18:04.782 }, 00:18:04.782 "peer_address": { 00:18:04.782 "trtype": "TCP", 00:18:04.782 "adrfam": "IPv4", 00:18:04.782 "traddr": "10.0.0.1", 00:18:04.782 "trsvcid": "35788" 00:18:04.782 }, 00:18:04.782 "auth": { 00:18:04.782 "state": "completed", 00:18:04.782 "digest": "sha512", 00:18:04.782 "dhgroup": "null" 00:18:04.782 } 00:18:04.782 } 00:18:04.782 ]' 00:18:04.782 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.782 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.782 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.782 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:04.782 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.041 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.041 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.041 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.041 14:20:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:18:05.041 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:18:05.609 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.609 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:05.609 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.609 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.609 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.609 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.609 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.609 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:05.609 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:05.868 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:05.868 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.868 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.868 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:05.868 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:05.868 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.868 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.868 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.868 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.868 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.868 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
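Each round above follows the same host/target RPC handshake: bdev_nvme_set_options pins the host to one digest/DH-group combination, nvmf_subsystem_add_host grants the host NQN that key pair on the target, and bdev_nvme_attach_controller then performs bidirectional DH-HMAC-CHAP during the fabrics connect. A minimal sketch of one such round, using the sockets, addresses, and NQNs from this log, and assuming the named keys key0/ckey0 were registered with both sides earlier in the run (their creation is not shown in this excerpt):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Host side: allow only sha512 + ffdhe2048 for DH-HMAC-CHAP.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options \
  --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# Target side (default RPC socket): authorize the host with the matching key pair.
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach; authentication happens as part of the CONNECT exchange.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
  -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0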
00:18:05.868 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.868 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.127 00:18:06.127 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.127 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.127 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.386 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.386 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.386 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.386 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.386 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.386 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.386 { 00:18:06.386 "cntlid": 105, 00:18:06.386 "qid": 0, 00:18:06.386 "state": "enabled", 00:18:06.386 "thread": "nvmf_tgt_poll_group_000", 00:18:06.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:06.386 "listen_address": { 00:18:06.386 "trtype": "TCP", 00:18:06.386 "adrfam": "IPv4", 00:18:06.386 "traddr": "10.0.0.2", 00:18:06.386 "trsvcid": "4420" 00:18:06.386 }, 00:18:06.386 "peer_address": { 00:18:06.386 "trtype": "TCP", 00:18:06.386 "adrfam": "IPv4", 00:18:06.386 "traddr": "10.0.0.1", 00:18:06.386 "trsvcid": "35816" 00:18:06.386 }, 00:18:06.386 "auth": { 00:18:06.386 "state": "completed", 00:18:06.386 "digest": "sha512", 00:18:06.386 "dhgroup": "ffdhe2048" 00:18:06.386 } 00:18:06.386 } 00:18:06.386 ]' 00:18:06.386 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.386 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.386 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.386 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:06.386 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.645 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.645 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.645 14:20:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.645 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:18:06.645 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:18:07.212 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.212 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:07.212 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.212 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.212 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.212 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.212 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:07.212 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:07.471 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:07.471 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.471 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.472 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:07.472 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:07.472 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.472 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.472 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.472 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:07.472 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.472 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.472 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.472 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.731 00:18:07.731 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.731 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.731 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.989 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.989 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.989 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.989 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.989 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.989 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.989 { 00:18:07.989 "cntlid": 107, 00:18:07.989 "qid": 0, 00:18:07.989 "state": "enabled", 00:18:07.990 "thread": "nvmf_tgt_poll_group_000", 00:18:07.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:07.990 "listen_address": { 00:18:07.990 "trtype": "TCP", 00:18:07.990 "adrfam": "IPv4", 00:18:07.990 "traddr": "10.0.0.2", 00:18:07.990 "trsvcid": "4420" 00:18:07.990 }, 00:18:07.990 "peer_address": { 00:18:07.990 "trtype": "TCP", 00:18:07.990 "adrfam": "IPv4", 00:18:07.990 "traddr": "10.0.0.1", 00:18:07.990 "trsvcid": "35864" 00:18:07.990 }, 00:18:07.990 "auth": { 00:18:07.990 "state": "completed", 00:18:07.990 "digest": "sha512", 00:18:07.990 "dhgroup": "ffdhe2048" 00:18:07.990 } 00:18:07.990 } 00:18:07.990 ]' 00:18:07.990 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.990 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.990 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.990 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:07.990 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:08.249 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.249 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.249 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.249 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:18:08.249 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:18:08.817 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.817 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:08.817 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.817 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.817 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.817 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.817 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:08.817 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:09.076 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:09.076 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.076 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.076 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:09.076 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:09.076 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.076 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
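After each attach, the test asks the target for the newly created qpair and asserts the negotiated parameters; that is what the nvmf_subsystem_get_qpairs JSON dumps and the jq probes in this log are doing. A condensed sketch of that verification step (RPC path as in the log; the expected values shown are those of the sha512/ffdhe2048 round):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

# Auth must have completed with exactly the digest/dhgroup configured above.
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]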
00:18:09.076 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.076 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.076 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.076 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.076 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.076 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.334 00:18:09.334 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.334 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.334 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.592 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.592 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.592 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.592 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.592 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.592 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.592 { 00:18:09.592 "cntlid": 109, 00:18:09.592 "qid": 0, 00:18:09.593 "state": "enabled", 00:18:09.593 "thread": "nvmf_tgt_poll_group_000", 00:18:09.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:09.593 "listen_address": { 00:18:09.593 "trtype": "TCP", 00:18:09.593 "adrfam": "IPv4", 00:18:09.593 "traddr": "10.0.0.2", 00:18:09.593 "trsvcid": "4420" 00:18:09.593 }, 00:18:09.593 "peer_address": { 00:18:09.593 "trtype": "TCP", 00:18:09.593 "adrfam": "IPv4", 00:18:09.593 "traddr": "10.0.0.1", 00:18:09.593 "trsvcid": "35894" 00:18:09.593 }, 00:18:09.593 "auth": { 00:18:09.593 "state": "completed", 00:18:09.593 "digest": "sha512", 00:18:09.593 "dhgroup": "ffdhe2048" 00:18:09.593 } 00:18:09.593 } 00:18:09.593 ]' 00:18:09.593 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.593 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.593 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.593 14:20:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:09.593 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.593 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.593 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.593 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.851 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:18:09.851 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:18:10.418 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.418 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:10.418 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.418 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.418 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.418 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.418 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:10.418 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:10.676 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:10.676 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.676 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.676 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:10.676 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:10.676 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.676 14:20:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:10.676 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.676 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.676 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.676 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:10.676 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.676 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.935 00:18:10.935 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.935 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.935 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.193 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.193 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.193 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.193 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.193 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.193 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.193 { 00:18:11.194 "cntlid": 111, 00:18:11.194 "qid": 0, 00:18:11.194 "state": "enabled", 00:18:11.194 "thread": "nvmf_tgt_poll_group_000", 00:18:11.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:11.194 "listen_address": { 00:18:11.194 "trtype": "TCP", 00:18:11.194 "adrfam": "IPv4", 00:18:11.194 "traddr": "10.0.0.2", 00:18:11.194 "trsvcid": "4420" 00:18:11.194 }, 00:18:11.194 "peer_address": { 00:18:11.194 "trtype": "TCP", 00:18:11.194 "adrfam": "IPv4", 00:18:11.194 "traddr": "10.0.0.1", 00:18:11.194 "trsvcid": "35908" 00:18:11.194 }, 00:18:11.194 "auth": { 00:18:11.194 "state": "completed", 00:18:11.194 "digest": "sha512", 00:18:11.194 "dhgroup": "ffdhe2048" 00:18:11.194 } 00:18:11.194 } 00:18:11.194 ]' 00:18:11.194 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.194 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.194 
14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.194 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:11.194 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.194 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.194 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.194 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.452 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:18:11.452 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:18:12.023 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.023 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:12.023 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.023 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.023 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.023 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.023 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.023 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.023 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.282 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:12.282 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.282 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.282 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:12.282 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:12.282 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.282 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.282 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.282 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.282 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.282 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.282 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.282 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.541 00:18:12.541 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.541 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.541 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.800 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.800 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.800 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.800 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.800 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.800 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.800 { 00:18:12.800 "cntlid": 113, 00:18:12.800 "qid": 0, 00:18:12.800 "state": "enabled", 00:18:12.800 "thread": "nvmf_tgt_poll_group_000", 00:18:12.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:12.800 "listen_address": { 00:18:12.800 "trtype": "TCP", 00:18:12.800 "adrfam": "IPv4", 00:18:12.800 "traddr": "10.0.0.2", 00:18:12.800 "trsvcid": "4420" 00:18:12.800 }, 00:18:12.800 "peer_address": { 00:18:12.800 "trtype": "TCP", 00:18:12.800 "adrfam": "IPv4", 00:18:12.800 "traddr": "10.0.0.1", 00:18:12.800 "trsvcid": "35950" 00:18:12.800 }, 00:18:12.800 "auth": { 00:18:12.800 "state": "completed", 00:18:12.800 "digest": "sha512", 00:18:12.800 "dhgroup": "ffdhe3072" 00:18:12.800 } 00:18:12.800 } 00:18:12.800 ]' 00:18:12.800 14:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.800 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.800 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.800 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:12.800 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.800 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.800 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.800 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.058 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:18:13.058 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:18:13.626 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.626 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:13.626 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.626 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.626 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.626 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.626 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:13.626 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:13.885 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:13.885 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.885 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:13.885 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:13.885 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:13.885 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.885 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.885 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.885 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.885 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.885 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.885 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.885 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.144 00:18:14.144 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.144 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.144 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.403 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.403 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.403 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.403 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.403 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.403 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.403 { 00:18:14.403 "cntlid": 115, 00:18:14.403 "qid": 0, 00:18:14.403 "state": "enabled", 00:18:14.403 "thread": "nvmf_tgt_poll_group_000", 00:18:14.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:14.403 "listen_address": { 00:18:14.403 "trtype": "TCP", 00:18:14.403 "adrfam": "IPv4", 00:18:14.403 "traddr": "10.0.0.2", 00:18:14.403 "trsvcid": "4420" 00:18:14.403 }, 00:18:14.403 "peer_address": { 00:18:14.403 "trtype": "TCP", 00:18:14.403 "adrfam": "IPv4", 
00:18:14.403 "traddr": "10.0.0.1", 00:18:14.403 "trsvcid": "33298" 00:18:14.403 }, 00:18:14.403 "auth": { 00:18:14.403 "state": "completed", 00:18:14.403 "digest": "sha512", 00:18:14.403 "dhgroup": "ffdhe3072" 00:18:14.403 } 00:18:14.403 } 00:18:14.403 ]' 00:18:14.403 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.403 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.403 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.403 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:14.403 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.403 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.403 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.403 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.661 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:18:14.661 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:18:15.228 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.228 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:15.228 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.228 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.228 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.228 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.228 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:15.228 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:15.487 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
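The "nvme connect ... --dhchap-secret" entries are the kernel-host half of the same check: nvme-cli hands the DHHC-1 secret blobs (the textual key format used for NVMe in-band authentication secrets) to the kernel host, which then authenticates against the SPDK target. A sketch mirroring the log's invocation, with the secrets replaced by placeholders; producing such blobs (for example with nvme-cli's gen-dhchap-key subcommand) is an assumption not shown in this excerpt:

HOSTID=801347e8-3fd0-e911-906e-0017a4403562
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q "nqn.2014-08.org.nvmexpress:uuid:${HOSTID}" --hostid "$HOSTID" -l 0 \
  --dhchap-secret 'DHHC-1:01:<host key blob>' \
  --dhchap-ctrl-secret 'DHHC-1:02:<controller key blob>'

# Tear down before the next digest/dhgroup/key combination.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0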
00:18:15.487 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.487 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.487 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:15.487 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:15.487 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.487 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.487 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.487 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.487 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.487 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.487 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.487 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.746 00:18:15.746 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.746 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.746 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.005 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.005 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.005 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.005 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.005 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.005 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.005 { 00:18:16.005 "cntlid": 117, 00:18:16.005 "qid": 0, 00:18:16.005 "state": "enabled", 00:18:16.005 "thread": "nvmf_tgt_poll_group_000", 00:18:16.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:16.005 "listen_address": { 00:18:16.005 "trtype": "TCP", 
00:18:16.005 "adrfam": "IPv4", 00:18:16.005 "traddr": "10.0.0.2", 00:18:16.005 "trsvcid": "4420" 00:18:16.005 }, 00:18:16.005 "peer_address": { 00:18:16.005 "trtype": "TCP", 00:18:16.005 "adrfam": "IPv4", 00:18:16.005 "traddr": "10.0.0.1", 00:18:16.005 "trsvcid": "33324" 00:18:16.005 }, 00:18:16.005 "auth": { 00:18:16.005 "state": "completed", 00:18:16.005 "digest": "sha512", 00:18:16.005 "dhgroup": "ffdhe3072" 00:18:16.005 } 00:18:16.005 } 00:18:16.005 ]' 00:18:16.005 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.005 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.005 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.005 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:16.005 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.005 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.005 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.005 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.264 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:18:16.264 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:18:16.831 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.831 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:16.831 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.831 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.831 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.831 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.831 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:16.831 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:17.090 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:17.090 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.090 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:17.090 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:17.090 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:17.090 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.090 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:17.090 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.090 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.090 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.090 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:17.090 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.090 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.349 00:18:17.349 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.349 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.349 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.608 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.608 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.608 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.608 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.608 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.608 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.608 { 00:18:17.608 "cntlid": 119, 00:18:17.608 "qid": 0, 00:18:17.608 "state": "enabled", 00:18:17.608 "thread": "nvmf_tgt_poll_group_000", 00:18:17.608 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:17.608 "listen_address": { 00:18:17.608 "trtype": "TCP", 00:18:17.608 "adrfam": "IPv4", 00:18:17.608 "traddr": "10.0.0.2", 00:18:17.608 "trsvcid": "4420" 00:18:17.608 }, 00:18:17.608 "peer_address": { 00:18:17.608 "trtype": "TCP", 00:18:17.608 "adrfam": "IPv4", 00:18:17.608 "traddr": "10.0.0.1", 00:18:17.608 "trsvcid": "33336" 00:18:17.608 }, 00:18:17.608 "auth": { 00:18:17.608 "state": "completed", 00:18:17.608 "digest": "sha512", 00:18:17.608 "dhgroup": "ffdhe3072" 00:18:17.608 } 00:18:17.608 } 00:18:17.608 ]' 00:18:17.608 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.608 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.608 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.608 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:17.608 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.608 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.608 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.608 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.867 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:18:17.867 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:18:18.435 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.435 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:18.435 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.435 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.435 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.435 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:18.435 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.435 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:18.435 14:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:18.694 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:18.694 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.694 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.694 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:18.694 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:18.694 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.694 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.694 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.694 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.694 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.694 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.694 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.694 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.953 00:18:18.953 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.953 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.953 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.212 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.212 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.212 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.212 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.212 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.212 14:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.212 { 00:18:19.212 "cntlid": 121, 00:18:19.212 "qid": 0, 00:18:19.212 "state": "enabled", 00:18:19.212 "thread": "nvmf_tgt_poll_group_000", 00:18:19.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:19.212 "listen_address": { 00:18:19.212 "trtype": "TCP", 00:18:19.212 "adrfam": "IPv4", 00:18:19.212 "traddr": "10.0.0.2", 00:18:19.212 "trsvcid": "4420" 00:18:19.212 }, 00:18:19.212 "peer_address": { 00:18:19.212 "trtype": "TCP", 00:18:19.212 "adrfam": "IPv4", 00:18:19.212 "traddr": "10.0.0.1", 00:18:19.212 "trsvcid": "33370" 00:18:19.212 }, 00:18:19.212 "auth": { 00:18:19.212 "state": "completed", 00:18:19.212 "digest": "sha512", 00:18:19.212 "dhgroup": "ffdhe4096" 00:18:19.212 } 00:18:19.212 } 00:18:19.212 ]' 00:18:19.212 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.212 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.212 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.212 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:19.212 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.212 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.212 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.212 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.470 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:18:19.470 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:18:20.038 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.038 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:20.038 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.038 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.038 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
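
(The trace above repeats one verification cycle per digest/dhgroup/key combination: host-side DH-HMAC-CHAP options are pinned over /var/tmp/host.sock, the host NQN is allowed on the subsystem with the key under test, a TCP controller is attached, the qpair's auth block is checked with jq, and the same key is then exercised through the kernel initiator before teardown. Below is a minimal sketch of one such iteration, distilled from the commands visible in this log. Assumptions are hedged in the comments: the target-side rpc.py calls use the default socket as rpc_cmd does here, key1/ckey1 are key objects registered earlier in the run and not shown in this excerpt, and the DHHC-1 secret values are elided placeholders.)

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
    subnqn=nqn.2024-03.io.spdk:cnode0
    key='DHHC-1:...'    # placeholder: whichever DHHC-1 secret the loop selects
    ckey='DHHC-1:...'   # placeholder: matching controller secret, when one exists

    # pin the host (initiator) side to the digest/dhgroup under test
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # allow the host on the target subsystem with the same key material;
    # key1/ckey1 name keys set up earlier in the run (not shown in this excerpt)
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # attach a host bdev controller over TCP; DH-CHAP runs during connect
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # the qpair's auth block should report the negotiated parameters
    $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'
    # expected: state "completed", digest "sha512", dhgroup "ffdhe4096"

    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # the same key is then exercised through the kernel initiator,
    # passing the raw DHHC-1 secrets directly to nvme-cli
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n "$subnqn"
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
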
00:18:20.038 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.038 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:20.038 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:20.296 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:20.296 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.297 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:20.297 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:20.297 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:20.297 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.297 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.297 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.297 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.297 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.297 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.297 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.297 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.555 00:18:20.555 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.555 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.555 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.814 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.814 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.814 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.814 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.814 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.814 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.814 { 00:18:20.814 "cntlid": 123, 00:18:20.814 "qid": 0, 00:18:20.814 "state": "enabled", 00:18:20.814 "thread": "nvmf_tgt_poll_group_000", 00:18:20.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:20.814 "listen_address": { 00:18:20.814 "trtype": "TCP", 00:18:20.814 "adrfam": "IPv4", 00:18:20.814 "traddr": "10.0.0.2", 00:18:20.814 "trsvcid": "4420" 00:18:20.814 }, 00:18:20.814 "peer_address": { 00:18:20.814 "trtype": "TCP", 00:18:20.814 "adrfam": "IPv4", 00:18:20.814 "traddr": "10.0.0.1", 00:18:20.814 "trsvcid": "33400" 00:18:20.814 }, 00:18:20.814 "auth": { 00:18:20.814 "state": "completed", 00:18:20.814 "digest": "sha512", 00:18:20.814 "dhgroup": "ffdhe4096" 00:18:20.814 } 00:18:20.814 } 00:18:20.814 ]' 00:18:20.814 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.814 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.814 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.814 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:20.814 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.814 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.814 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.814 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.074 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:18:21.074 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:18:21.642 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.642 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:21.642 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.642 14:20:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.642 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.642 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.642 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:21.642 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:21.901 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:21.901 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.901 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:21.901 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:21.901 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:21.901 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.901 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.901 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.901 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.901 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.901 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.901 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.901 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.160 00:18:22.160 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.160 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.160 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.419 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.419 14:20:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.419 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.419 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.419 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.419 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.419 { 00:18:22.419 "cntlid": 125, 00:18:22.419 "qid": 0, 00:18:22.419 "state": "enabled", 00:18:22.419 "thread": "nvmf_tgt_poll_group_000", 00:18:22.419 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:22.419 "listen_address": { 00:18:22.419 "trtype": "TCP", 00:18:22.419 "adrfam": "IPv4", 00:18:22.419 "traddr": "10.0.0.2", 00:18:22.419 "trsvcid": "4420" 00:18:22.419 }, 00:18:22.419 "peer_address": { 00:18:22.419 "trtype": "TCP", 00:18:22.419 "adrfam": "IPv4", 00:18:22.419 "traddr": "10.0.0.1", 00:18:22.419 "trsvcid": "33426" 00:18:22.419 }, 00:18:22.419 "auth": { 00:18:22.419 "state": "completed", 00:18:22.419 "digest": "sha512", 00:18:22.419 "dhgroup": "ffdhe4096" 00:18:22.419 } 00:18:22.419 } 00:18:22.419 ]' 00:18:22.419 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.419 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.419 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.419 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:22.419 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.419 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.419 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.419 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.678 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:18:22.678 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:18:23.245 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.245 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:23.245 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.245 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.245 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.245 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.245 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:23.245 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:23.504 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:23.504 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.504 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:23.504 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:23.504 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:23.504 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.504 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:23.504 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.504 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.504 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.504 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:23.504 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.504 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.763 00:18:23.763 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.763 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.763 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.022 14:20:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.022 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.022 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.022 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.022 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.022 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.022 { 00:18:24.022 "cntlid": 127, 00:18:24.022 "qid": 0, 00:18:24.022 "state": "enabled", 00:18:24.022 "thread": "nvmf_tgt_poll_group_000", 00:18:24.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:24.022 "listen_address": { 00:18:24.022 "trtype": "TCP", 00:18:24.022 "adrfam": "IPv4", 00:18:24.022 "traddr": "10.0.0.2", 00:18:24.022 "trsvcid": "4420" 00:18:24.022 }, 00:18:24.022 "peer_address": { 00:18:24.022 "trtype": "TCP", 00:18:24.022 "adrfam": "IPv4", 00:18:24.022 "traddr": "10.0.0.1", 00:18:24.022 "trsvcid": "39456" 00:18:24.022 }, 00:18:24.022 "auth": { 00:18:24.022 "state": "completed", 00:18:24.022 "digest": "sha512", 00:18:24.022 "dhgroup": "ffdhe4096" 00:18:24.022 } 00:18:24.022 } 00:18:24.022 ]' 00:18:24.022 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.022 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.022 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.022 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:24.022 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.022 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.022 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.022 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.281 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:18:24.281 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:18:24.966 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.966 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:24.966 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.966 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.966 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.966 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:24.966 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.966 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:24.966 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:24.966 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:24.966 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.966 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.966 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:24.966 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:24.966 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.966 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.966 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.966 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.966 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.966 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.966 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.966 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.533 00:18:25.533 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.533 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.533 
14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.533 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.533 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.533 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.533 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.533 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.533 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.533 { 00:18:25.533 "cntlid": 129, 00:18:25.533 "qid": 0, 00:18:25.533 "state": "enabled", 00:18:25.533 "thread": "nvmf_tgt_poll_group_000", 00:18:25.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:25.533 "listen_address": { 00:18:25.533 "trtype": "TCP", 00:18:25.533 "adrfam": "IPv4", 00:18:25.533 "traddr": "10.0.0.2", 00:18:25.533 "trsvcid": "4420" 00:18:25.533 }, 00:18:25.533 "peer_address": { 00:18:25.533 "trtype": "TCP", 00:18:25.533 "adrfam": "IPv4", 00:18:25.533 "traddr": "10.0.0.1", 00:18:25.533 "trsvcid": "39484" 00:18:25.533 }, 00:18:25.533 "auth": { 00:18:25.533 "state": "completed", 00:18:25.533 "digest": "sha512", 00:18:25.533 "dhgroup": "ffdhe6144" 00:18:25.534 } 00:18:25.534 } 00:18:25.534 ]' 00:18:25.534 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.792 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.792 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.792 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:25.792 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.792 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.792 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.792 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.050 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:18:26.050 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret 
DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:18:26.617 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.617 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:26.617 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.617 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.617 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.617 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.617 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:26.617 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:26.617 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:26.617 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.617 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:26.617 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:26.617 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:26.617 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.617 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.617 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.617 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.617 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.617 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.617 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.617 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.184 00:18:27.184 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.184 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.184 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.184 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.184 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.184 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.184 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.184 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.184 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.184 { 00:18:27.184 "cntlid": 131, 00:18:27.184 "qid": 0, 00:18:27.184 "state": "enabled", 00:18:27.184 "thread": "nvmf_tgt_poll_group_000", 00:18:27.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:27.184 "listen_address": { 00:18:27.184 "trtype": "TCP", 00:18:27.184 "adrfam": "IPv4", 00:18:27.184 "traddr": "10.0.0.2", 00:18:27.184 "trsvcid": "4420" 00:18:27.184 }, 00:18:27.184 "peer_address": { 00:18:27.184 "trtype": "TCP", 00:18:27.184 "adrfam": "IPv4", 00:18:27.184 "traddr": "10.0.0.1", 00:18:27.184 "trsvcid": "39504" 00:18:27.184 }, 00:18:27.184 "auth": { 00:18:27.184 "state": "completed", 00:18:27.184 "digest": "sha512", 00:18:27.184 "dhgroup": "ffdhe6144" 00:18:27.184 } 00:18:27.184 } 00:18:27.184 ]' 00:18:27.184 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.184 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.184 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.443 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:27.443 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.443 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.443 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.443 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.701 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:18:27.701 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:18:28.269 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.269 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:28.269 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.269 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.269 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.269 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.269 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:28.269 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:28.269 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:28.269 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.269 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:28.269 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:28.269 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:28.269 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.269 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.269 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.269 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.527 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.527 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.527 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.528 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.786 00:18:28.786 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.786 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.786 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.045 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.045 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.045 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.045 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.045 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.045 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.045 { 00:18:29.045 "cntlid": 133, 00:18:29.045 "qid": 0, 00:18:29.045 "state": "enabled", 00:18:29.045 "thread": "nvmf_tgt_poll_group_000", 00:18:29.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:29.045 "listen_address": { 00:18:29.045 "trtype": "TCP", 00:18:29.045 "adrfam": "IPv4", 00:18:29.045 "traddr": "10.0.0.2", 00:18:29.045 "trsvcid": "4420" 00:18:29.045 }, 00:18:29.045 "peer_address": { 00:18:29.045 "trtype": "TCP", 00:18:29.045 "adrfam": "IPv4", 00:18:29.045 "traddr": "10.0.0.1", 00:18:29.045 "trsvcid": "39534" 00:18:29.045 }, 00:18:29.045 "auth": { 00:18:29.045 "state": "completed", 00:18:29.045 "digest": "sha512", 00:18:29.045 "dhgroup": "ffdhe6144" 00:18:29.045 } 00:18:29.045 } 00:18:29.045 ]' 00:18:29.045 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.045 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.045 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.046 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:29.046 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.046 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.046 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.046 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.304 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret 
DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:18:29.304 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:18:29.871 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.871 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:29.871 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.871 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.871 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.871 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.871 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:29.871 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:30.129 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:30.129 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.129 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:30.129 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:30.129 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:30.129 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.129 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:30.129 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.129 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.129 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.129 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:30.129 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:30.129 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:30.387 00:18:30.387 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.387 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.387 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.646 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.646 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.646 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.646 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.646 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.646 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.646 { 00:18:30.646 "cntlid": 135, 00:18:30.646 "qid": 0, 00:18:30.646 "state": "enabled", 00:18:30.646 "thread": "nvmf_tgt_poll_group_000", 00:18:30.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:30.646 "listen_address": { 00:18:30.646 "trtype": "TCP", 00:18:30.646 "adrfam": "IPv4", 00:18:30.646 "traddr": "10.0.0.2", 00:18:30.646 "trsvcid": "4420" 00:18:30.646 }, 00:18:30.646 "peer_address": { 00:18:30.646 "trtype": "TCP", 00:18:30.646 "adrfam": "IPv4", 00:18:30.646 "traddr": "10.0.0.1", 00:18:30.646 "trsvcid": "39556" 00:18:30.646 }, 00:18:30.646 "auth": { 00:18:30.646 "state": "completed", 00:18:30.646 "digest": "sha512", 00:18:30.646 "dhgroup": "ffdhe6144" 00:18:30.646 } 00:18:30.646 } 00:18:30.646 ]' 00:18:30.646 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.646 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.646 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.646 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:30.646 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.904 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.904 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.904 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.904 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:18:30.904 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:18:31.470 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.470 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:31.470 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.470 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.470 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.470 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:31.470 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.470 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:31.470 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:31.729 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:31.729 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.729 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:31.729 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:31.729 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:31.729 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.729 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.729 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.729 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.729 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.729 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.729 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.729 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.296 00:18:32.296 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.296 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.296 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.555 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.555 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.555 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.555 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.555 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.555 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.555 { 00:18:32.555 "cntlid": 137, 00:18:32.555 "qid": 0, 00:18:32.555 "state": "enabled", 00:18:32.555 "thread": "nvmf_tgt_poll_group_000", 00:18:32.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:32.555 "listen_address": { 00:18:32.555 "trtype": "TCP", 00:18:32.555 "adrfam": "IPv4", 00:18:32.555 "traddr": "10.0.0.2", 00:18:32.555 "trsvcid": "4420" 00:18:32.555 }, 00:18:32.555 "peer_address": { 00:18:32.555 "trtype": "TCP", 00:18:32.555 "adrfam": "IPv4", 00:18:32.555 "traddr": "10.0.0.1", 00:18:32.555 "trsvcid": "39574" 00:18:32.555 }, 00:18:32.555 "auth": { 00:18:32.555 "state": "completed", 00:18:32.555 "digest": "sha512", 00:18:32.555 "dhgroup": "ffdhe8192" 00:18:32.555 } 00:18:32.555 } 00:18:32.555 ]' 00:18:32.555 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.555 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.555 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.555 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:32.555 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.555 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.555 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.555 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.813 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:18:32.813 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:18:33.380 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.380 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:33.380 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.380 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.380 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.380 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.380 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:33.380 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:33.639 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:33.639 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.639 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:33.639 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:33.639 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:33.639 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.639 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.639 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.639 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.639 14:20:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.639 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.639 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.639 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.206 00:18:34.206 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.206 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.206 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.206 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.206 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.206 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.206 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.206 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.206 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.206 { 00:18:34.206 "cntlid": 139, 00:18:34.206 "qid": 0, 00:18:34.206 "state": "enabled", 00:18:34.206 "thread": "nvmf_tgt_poll_group_000", 00:18:34.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:34.206 "listen_address": { 00:18:34.206 "trtype": "TCP", 00:18:34.206 "adrfam": "IPv4", 00:18:34.206 "traddr": "10.0.0.2", 00:18:34.206 "trsvcid": "4420" 00:18:34.206 }, 00:18:34.206 "peer_address": { 00:18:34.206 "trtype": "TCP", 00:18:34.206 "adrfam": "IPv4", 00:18:34.206 "traddr": "10.0.0.1", 00:18:34.206 "trsvcid": "58542" 00:18:34.206 }, 00:18:34.206 "auth": { 00:18:34.206 "state": "completed", 00:18:34.206 "digest": "sha512", 00:18:34.206 "dhgroup": "ffdhe8192" 00:18:34.206 } 00:18:34.206 } 00:18:34.206 ]' 00:18:34.206 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.206 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.206 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.465 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:34.465 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.465 14:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.465 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.465 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.723 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:18:34.723 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: --dhchap-ctrl-secret DHHC-1:02:NGQ2MDNlMzYzYmY2MDk1YjJhZWQ2YmEwMjA4Mzg2YWMzY2FmZGJmYTg5ZDlkMjVm9KKr/g==: 00:18:35.290 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.290 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:35.290 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.290 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.290 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.290 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.290 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:35.290 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:35.290 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:35.290 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.290 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:35.290 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:35.290 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:35.290 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.290 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.290 14:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.290 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.290 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.290 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.290 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.290 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.857 00:18:35.857 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.857 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.857 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.116 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.116 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.116 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.116 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.116 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.116 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.116 { 00:18:36.116 "cntlid": 141, 00:18:36.116 "qid": 0, 00:18:36.116 "state": "enabled", 00:18:36.116 "thread": "nvmf_tgt_poll_group_000", 00:18:36.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:36.116 "listen_address": { 00:18:36.116 "trtype": "TCP", 00:18:36.116 "adrfam": "IPv4", 00:18:36.116 "traddr": "10.0.0.2", 00:18:36.116 "trsvcid": "4420" 00:18:36.116 }, 00:18:36.116 "peer_address": { 00:18:36.116 "trtype": "TCP", 00:18:36.116 "adrfam": "IPv4", 00:18:36.116 "traddr": "10.0.0.1", 00:18:36.116 "trsvcid": "58558" 00:18:36.116 }, 00:18:36.116 "auth": { 00:18:36.116 "state": "completed", 00:18:36.116 "digest": "sha512", 00:18:36.116 "dhgroup": "ffdhe8192" 00:18:36.116 } 00:18:36.116 } 00:18:36.116 ]' 00:18:36.116 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.116 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.116 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.116 14:20:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:36.116 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.116 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.116 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.116 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.375 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:18:36.375 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:01:ODgyZWMwYTNmMGZkZjg0ZDU5MWFjNjcxY2E2ODBkMTLNS/ZI: 00:18:36.942 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.942 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:36.942 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.942 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.942 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.942 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.942 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:36.942 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:37.201 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:37.201 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.201 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:37.201 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:37.201 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:37.201 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.201 14:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:37.201 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.201 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.201 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.201 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:37.201 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:37.201 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:37.769 00:18:37.769 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:37.769 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:37.769 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.028 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.028 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.028 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.028 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.028 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.028 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.028 { 00:18:38.028 "cntlid": 143, 00:18:38.028 "qid": 0, 00:18:38.028 "state": "enabled", 00:18:38.028 "thread": "nvmf_tgt_poll_group_000", 00:18:38.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:38.028 "listen_address": { 00:18:38.028 "trtype": "TCP", 00:18:38.028 "adrfam": "IPv4", 00:18:38.028 "traddr": "10.0.0.2", 00:18:38.028 "trsvcid": "4420" 00:18:38.028 }, 00:18:38.028 "peer_address": { 00:18:38.028 "trtype": "TCP", 00:18:38.028 "adrfam": "IPv4", 00:18:38.028 "traddr": "10.0.0.1", 00:18:38.028 "trsvcid": "58578" 00:18:38.028 }, 00:18:38.028 "auth": { 00:18:38.028 "state": "completed", 00:18:38.028 "digest": "sha512", 00:18:38.028 "dhgroup": "ffdhe8192" 00:18:38.028 } 00:18:38.028 } 00:18:38.028 ]' 00:18:38.028 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.028 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.028 
14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.028 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:38.028 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.028 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.028 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.028 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.287 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:18:38.287 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:18:38.854 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.854 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:38.854 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.854 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.854 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.854 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:38.854 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:38.854 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:38.854 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:38.854 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:38.854 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:39.113 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:39.113 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.113 14:20:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:39.113 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:39.113 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:39.113 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.113 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.113 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.113 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.113 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.113 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.113 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.113 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.679 00:18:39.679 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.679 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.679 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.679 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.679 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.679 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.679 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.679 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.679 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.679 { 00:18:39.679 "cntlid": 145, 00:18:39.679 "qid": 0, 00:18:39.679 "state": "enabled", 00:18:39.679 "thread": "nvmf_tgt_poll_group_000", 00:18:39.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:39.679 "listen_address": { 00:18:39.679 "trtype": "TCP", 00:18:39.679 "adrfam": "IPv4", 00:18:39.679 "traddr": "10.0.0.2", 00:18:39.679 "trsvcid": "4420" 00:18:39.679 }, 00:18:39.679 "peer_address": { 00:18:39.679 
"trtype": "TCP", 00:18:39.679 "adrfam": "IPv4", 00:18:39.679 "traddr": "10.0.0.1", 00:18:39.679 "trsvcid": "58608" 00:18:39.679 }, 00:18:39.679 "auth": { 00:18:39.679 "state": "completed", 00:18:39.679 "digest": "sha512", 00:18:39.679 "dhgroup": "ffdhe8192" 00:18:39.679 } 00:18:39.679 } 00:18:39.679 ]' 00:18:39.679 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.679 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:39.679 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.939 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:39.939 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.939 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.939 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.939 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.197 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:18:40.197 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2VjMTUyMWNhMzEwMzA2ZmU1NDE3NTIzYjk3NmJmN2RkOTcxZGUzYTczMGQyZjRj176PhA==: --dhchap-ctrl-secret DHHC-1:03:MGE4ODcxNGFkNDE1ZWI3MTlmYWFjYmZhMzgwZWE3YmFiMjY1NmYxZjhiYzY3MTAzNDVmNzI0ZGIwZmI3M2U3ND8NrxY=: 00:18:40.764 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.765 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:40.765 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.765 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.765 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.765 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:18:40.765 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.765 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.765 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.765 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:40.765 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:40.765 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:40.765 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:40.765 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.765 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:40.765 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.765 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:40.765 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:40.765 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:41.023 request: 00:18:41.023 { 00:18:41.023 "name": "nvme0", 00:18:41.023 "trtype": "tcp", 00:18:41.023 "traddr": "10.0.0.2", 00:18:41.023 "adrfam": "ipv4", 00:18:41.023 "trsvcid": "4420", 00:18:41.023 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:41.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:41.023 "prchk_reftag": false, 00:18:41.023 "prchk_guard": false, 00:18:41.023 "hdgst": false, 00:18:41.023 "ddgst": false, 00:18:41.023 "dhchap_key": "key2", 00:18:41.023 "allow_unrecognized_csi": false, 00:18:41.023 "method": "bdev_nvme_attach_controller", 00:18:41.023 "req_id": 1 00:18:41.023 } 00:18:41.023 Got JSON-RPC error response 00:18:41.023 response: 00:18:41.023 { 00:18:41.023 "code": -5, 00:18:41.023 "message": "Input/output error" 00:18:41.023 } 00:18:41.282 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:41.282 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:41.282 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:41.282 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:41.282 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:41.282 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.282 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.282 14:20:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.282 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.282 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.282 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.282 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.282 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:41.282 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:41.282 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:41.282 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:41.282 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.282 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:41.282 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.282 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:41.282 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:41.282 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:41.541 request: 00:18:41.541 { 00:18:41.541 "name": "nvme0", 00:18:41.541 "trtype": "tcp", 00:18:41.541 "traddr": "10.0.0.2", 00:18:41.541 "adrfam": "ipv4", 00:18:41.541 "trsvcid": "4420", 00:18:41.541 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:41.541 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:41.541 "prchk_reftag": false, 00:18:41.541 "prchk_guard": false, 00:18:41.541 "hdgst": false, 00:18:41.541 "ddgst": false, 00:18:41.541 "dhchap_key": "key1", 00:18:41.541 "dhchap_ctrlr_key": "ckey2", 00:18:41.541 "allow_unrecognized_csi": false, 00:18:41.541 "method": "bdev_nvme_attach_controller", 00:18:41.541 "req_id": 1 00:18:41.541 } 00:18:41.541 Got JSON-RPC error response 00:18:41.541 response: 00:18:41.541 { 00:18:41.541 "code": -5, 00:18:41.541 "message": "Input/output error" 00:18:41.541 } 00:18:41.541 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:41.541 14:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:41.541 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:41.541 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:41.541 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:41.541 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.541 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.541 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.541 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:18:41.541 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.541 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.541 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.541 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.541 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:41.541 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.541 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:41.541 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.541 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:41.541 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.541 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.541 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.541 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.109 request: 00:18:42.109 { 00:18:42.109 "name": "nvme0", 00:18:42.109 "trtype": "tcp", 00:18:42.109 "traddr": "10.0.0.2", 00:18:42.109 "adrfam": "ipv4", 00:18:42.109 "trsvcid": "4420", 00:18:42.109 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:42.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:42.109 "prchk_reftag": false, 00:18:42.109 "prchk_guard": false, 00:18:42.109 "hdgst": false, 00:18:42.109 "ddgst": false, 00:18:42.109 "dhchap_key": "key1", 00:18:42.109 "dhchap_ctrlr_key": "ckey1", 00:18:42.109 "allow_unrecognized_csi": false, 00:18:42.109 "method": "bdev_nvme_attach_controller", 00:18:42.109 "req_id": 1 00:18:42.109 } 00:18:42.109 Got JSON-RPC error response 00:18:42.109 response: 00:18:42.109 { 00:18:42.109 "code": -5, 00:18:42.109 "message": "Input/output error" 00:18:42.109 } 00:18:42.109 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:42.109 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:42.109 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:42.109 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:42.109 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:42.109 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.109 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.109 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.109 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1623951 00:18:42.109 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1623951 ']' 00:18:42.109 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1623951 00:18:42.109 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:42.109 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.109 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1623951 00:18:42.109 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:42.109 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:42.109 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1623951' 00:18:42.109 killing process with pid 1623951 00:18:42.109 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1623951 00:18:42.109 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1623951 00:18:42.368 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:42.368 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:42.368 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:42.368 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:42.368 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1645622 00:18:42.368 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1645622 00:18:42.368 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:42.368 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1645622 ']' 00:18:42.368 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.368 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.368 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.368 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.368 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.303 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.303 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:43.303 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:43.303 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:43.303 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.303 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.303 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:43.303 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1645622 00:18:43.303 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1645622 ']' 00:18:43.303 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.303 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.303 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
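
The restart above brings nvmf_tgt back with --wait-for-rpc and -L nvmf_auth, which turns on SPDK's nvmf_auth debug log component; the preceding negative tests had each failed as intended with JSON-RPC error -5 ("Input/output error") when the host presented a key the subsystem was not configured for. The records that follow load the generated DH-HMAC-CHAP secrets into the target's keyring. A minimal sketch of that keyring flow, using the same RPCs and key files that appear in this trace (rpc_cmd in the trace wraps scripts/rpc.py against the target's RPC socket, and the host entry references the key by its keyring name rather than by file path):

  # register the on-disk secrets under keyring names (paths from this run)
  scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.5gk
  scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3Sv
  # a subsystem host entry then refers to the registered name, not the file
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
      --dhchap-key key3
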
00:18:43.303 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.303 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.562 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.562 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:43.562 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:43.562 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.562 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.562 null0 00:18:43.562 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.562 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:43.562 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.qi2 00:18:43.562 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.562 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.562 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.562 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.8ha ]] 00:18:43.562 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8ha 00:18:43.562 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.562 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.562 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.562 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.fwp 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.3Sv ]] 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3Sv 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:43.563 14:20:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.qn9 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.uit ]] 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uit 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.5gk 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
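
The lines above load the generated secrets into the target keyring (key0-key3, plus controller keys ckey0-ckey2; key3 has no ctrlr key), then connect_authenticate sha512/ffdhe8192 grants the host NQN access with key3 and attaches a controller from the host-side app over its own RPC socket. A condensed sketch with values copied from this log, reusing the $SPDK shorthand from the sketch above:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
    # target/auth.sh@175-176: register each secret as a named keyring entry.
    "$SPDK/scripts/rpc.py" keyring_file_add_key key0 /tmp/spdk.key-null.qi2
    "$SPDK/scripts/rpc.py" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8ha
    # ... key1/ckey1, key2/ckey2 and key3 follow the same pattern ...
    # target/auth.sh@70: allow the host on the subsystem with DHCHAP key3.
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key3
    # target/auth.sh@60: attach from the host app, which serves /var/tmp/host.sock.
    "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
    # target/auth.sh@74-77: confirm the qpair completed auth with the expected digest/dhgroup.
    "$SPDK/scripts/rpc.py" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
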
00:18:43.563 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:44.499 nvme0n1 00:18:44.499 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.499 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.499 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.499 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.499 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.499 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.499 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.499 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.499 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.499 { 00:18:44.499 "cntlid": 1, 00:18:44.499 "qid": 0, 00:18:44.499 "state": "enabled", 00:18:44.499 "thread": "nvmf_tgt_poll_group_000", 00:18:44.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:44.499 "listen_address": { 00:18:44.499 "trtype": "TCP", 00:18:44.499 "adrfam": "IPv4", 00:18:44.499 "traddr": "10.0.0.2", 00:18:44.499 "trsvcid": "4420" 00:18:44.499 }, 00:18:44.499 "peer_address": { 00:18:44.499 "trtype": "TCP", 00:18:44.499 "adrfam": "IPv4", 00:18:44.499 "traddr": "10.0.0.1", 00:18:44.499 "trsvcid": "47738" 00:18:44.499 }, 00:18:44.499 "auth": { 00:18:44.499 "state": "completed", 00:18:44.499 "digest": "sha512", 00:18:44.499 "dhgroup": "ffdhe8192" 00:18:44.499 } 00:18:44.499 } 00:18:44.499 ]' 00:18:44.499 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.756 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.756 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.756 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:44.756 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.756 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.756 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.756 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.014 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:18:45.014 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:18:45.581 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.581 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:45.581 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.581 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.581 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.581 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:45.581 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.581 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.581 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.581 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:45.581 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:45.581 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:45.581 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:45.581 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:45.581 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:45.581 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.581 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:45.581 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.581 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:45.582 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:45.840 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:45.840 request: 00:18:45.840 { 00:18:45.840 "name": "nvme0", 00:18:45.840 "trtype": "tcp", 00:18:45.840 "traddr": "10.0.0.2", 00:18:45.840 "adrfam": "ipv4", 00:18:45.840 "trsvcid": "4420", 00:18:45.840 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:45.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:45.840 "prchk_reftag": false, 00:18:45.840 "prchk_guard": false, 00:18:45.840 "hdgst": false, 00:18:45.840 "ddgst": false, 00:18:45.840 "dhchap_key": "key3", 00:18:45.840 "allow_unrecognized_csi": false, 00:18:45.840 "method": "bdev_nvme_attach_controller", 00:18:45.840 "req_id": 1 00:18:45.840 } 00:18:45.840 Got JSON-RPC error response 00:18:45.840 response: 00:18:45.840 { 00:18:45.840 "code": -5, 00:18:45.840 "message": "Input/output error" 00:18:45.840 } 00:18:45.840 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:45.840 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:45.840 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:45.840 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:45.840 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:45.840 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:45.840 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:45.840 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:46.099 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:46.099 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:46.099 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:46.099 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:46.099 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.099 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:46.099 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.099 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:46.099 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:46.099 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:46.357 request: 00:18:46.357 { 00:18:46.357 "name": "nvme0", 00:18:46.357 "trtype": "tcp", 00:18:46.357 "traddr": "10.0.0.2", 00:18:46.357 "adrfam": "ipv4", 00:18:46.357 "trsvcid": "4420", 00:18:46.357 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:46.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:46.357 "prchk_reftag": false, 00:18:46.357 "prchk_guard": false, 00:18:46.357 "hdgst": false, 00:18:46.357 "ddgst": false, 00:18:46.357 "dhchap_key": "key3", 00:18:46.357 "allow_unrecognized_csi": false, 00:18:46.357 "method": "bdev_nvme_attach_controller", 00:18:46.357 "req_id": 1 00:18:46.357 } 00:18:46.357 Got JSON-RPC error response 00:18:46.357 response: 00:18:46.357 { 00:18:46.357 "code": -5, 00:18:46.357 "message": "Input/output error" 00:18:46.357 } 00:18:46.357 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:46.357 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:46.357 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:46.357 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:46.357 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:46.357 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:46.357 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:46.357 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:46.357 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:46.357 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:46.614 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:46.614 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.614 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.614 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.614 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:46.614 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.614 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.614 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.614 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:46.614 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:46.614 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:46.615 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:46.615 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.615 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:46.615 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.615 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:46.615 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:46.615 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:46.873 request: 00:18:46.873 { 00:18:46.873 "name": "nvme0", 00:18:46.873 "trtype": "tcp", 00:18:46.873 "traddr": "10.0.0.2", 00:18:46.873 "adrfam": "ipv4", 00:18:46.873 "trsvcid": "4420", 00:18:46.873 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:46.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:46.873 "prchk_reftag": false, 00:18:46.873 "prchk_guard": false, 00:18:46.873 "hdgst": false, 00:18:46.873 "ddgst": false, 00:18:46.873 "dhchap_key": "key0", 00:18:46.873 "dhchap_ctrlr_key": "key1", 00:18:46.873 "allow_unrecognized_csi": false, 00:18:46.873 "method": "bdev_nvme_attach_controller", 00:18:46.873 "req_id": 1 00:18:46.873 } 00:18:46.873 Got JSON-RPC error response 00:18:46.873 response: 00:18:46.873 { 00:18:46.873 "code": -5, 00:18:46.873 "message": "Input/output error" 00:18:46.873 } 00:18:46.873 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:46.873 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:46.873 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:46.873 14:20:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:46.873 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:46.873 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:46.873 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:47.131 nvme0n1 00:18:47.131 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:47.131 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:47.131 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.390 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.390 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.390 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.648 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:18:47.648 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.648 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.648 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.648 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:47.648 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:47.648 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:48.215 nvme0n1 00:18:48.215 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:48.215 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:48.215 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.473 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.473 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:48.473 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.473 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.473 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.473 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:48.473 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.473 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:48.732 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.732 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:18:48.732 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: --dhchap-ctrl-secret DHHC-1:03:ZGJiYTE0ZGJiZTlhOGI0OWYxYzU1MTA2MTU3YWNmZjEyYjljZTc2YjgyM2I5NjA2NjY2OWI4ODA4ZGI2MDM0Nq78svU=: 00:18:49.299 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:49.299 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:49.299 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:49.299 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:49.299 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:49.299 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:49.299 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:49.299 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.299 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.557 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:49.557 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:49.557 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:49.557 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:49.557 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:49.557 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:49.557 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:49.557 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:49.557 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:49.557 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:50.124 request: 00:18:50.124 { 00:18:50.124 "name": "nvme0", 00:18:50.124 "trtype": "tcp", 00:18:50.124 "traddr": "10.0.0.2", 00:18:50.124 "adrfam": "ipv4", 00:18:50.124 "trsvcid": "4420", 00:18:50.124 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:50.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:50.124 "prchk_reftag": false, 00:18:50.124 "prchk_guard": false, 00:18:50.124 "hdgst": false, 00:18:50.124 "ddgst": false, 00:18:50.124 "dhchap_key": "key1", 00:18:50.124 "allow_unrecognized_csi": false, 00:18:50.124 "method": "bdev_nvme_attach_controller", 00:18:50.124 "req_id": 1 00:18:50.124 } 00:18:50.124 Got JSON-RPC error response 00:18:50.124 response: 00:18:50.124 { 00:18:50.124 "code": -5, 00:18:50.124 "message": "Input/output error" 00:18:50.124 } 00:18:50.124 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:50.124 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:50.124 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:50.124 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:50.124 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:50.124 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:50.124 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:50.691 nvme0n1 00:18:50.691 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:50.691 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:50.691 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.950 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.950 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.950 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.209 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:51.209 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.209 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.209 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.209 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:51.209 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:51.209 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:51.468 nvme0n1 00:18:51.468 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:51.468 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:51.468 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.468 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.468 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.468 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.727 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:51.727 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.727 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.727 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.727 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: '' 2s 00:18:51.727 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:51.727 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:51.727 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: 00:18:51.727 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:51.727 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:51.727 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:51.727 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: ]] 00:18:51.727 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZjkxZGY0YjcwZmYwZDc3OWVmN2ZkZTc3NTM4OTRjMTg8xNVh: 00:18:51.727 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:51.727 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:51.727 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: 2s 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: ]] 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YTQyZDAzMmFlYjBmMmFkNWRkNTg3MWMxZTVkNTRmZTIwOWY4Y2I5MWI0NTgxMjgy+bYaoA==: 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:54.260 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:56.166 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:56.166 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:56.166 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:56.166 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:56.166 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:56.166 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:56.166 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:56.166 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.166 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:56.166 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.166 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.166 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.166 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:56.166 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:56.166 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:56.732 nvme0n1 00:18:56.732 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:56.732 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.732 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.732 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.732 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:56.732 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:57.300 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:57.300 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.300 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:57.300 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.300 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:57.300 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.300 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.300 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.300 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:57.300 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:57.559 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:57.559 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:57.559 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.817 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.817 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:57.818 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.818 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.818 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.818 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:57.818 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:57.818 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:57.818 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:57.818 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:57.818 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:57.818 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:57.818 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:57.818 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:58.076 request: 00:18:58.076 { 00:18:58.076 "name": "nvme0", 00:18:58.076 "dhchap_key": "key1", 00:18:58.076 "dhchap_ctrlr_key": "key3", 00:18:58.076 "method": "bdev_nvme_set_keys", 00:18:58.076 "req_id": 1 00:18:58.076 } 00:18:58.076 Got JSON-RPC error response 00:18:58.076 response: 00:18:58.076 { 00:18:58.076 "code": -13, 00:18:58.076 "message": "Permission denied" 00:18:58.076 } 00:18:58.335 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:58.335 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:58.335 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:58.335 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:58.335 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:58.335 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:58.335 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.335 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:58.335 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:59.713 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:59.713 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:59.713 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.713 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:59.713 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:59.713 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.713 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.713 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.713 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:59.713 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:59.713 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:00.281 nvme0n1 00:19:00.540 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:00.540 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.540 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.540 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.540 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:00.540 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:00.540 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:00.540 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
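
This stretch of the test exercises re-keying on a live controller: nvmf_subsystem_set_keys changes which keys the target accepts for the host, bdev_nvme_set_keys re-authenticates the host controller to match, and any mismatched pair (key1/key3 earlier, key2/key0 just below) is expected to be rejected with JSON-RPC error -13, Permission denied. The jq-length polls with sleep 1s then wait for the short --ctrlr-loss-timeout-sec 1 to reap a controller that can no longer authenticate. A condensed sketch, reusing $SPDK and $HOSTNQN from the sketches above:

    # target/auth.sh@252-253: rotate the accepted keys, then re-key the live controller.
    "$SPDK/scripts/rpc.py" nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # Negative check (the NOT helper above): a wrong ctrlr key must fail with -13.
    if "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key0; then
        echo "expected Permission denied" >&2
    fi
    # target/auth.sh@272-273: wait for the 1s ctrlr-loss timeout to remove the controller.
    while [ "$("$SPDK/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length)" -ne 0 ]; do
        sleep 1
    done
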
00:19:00.540 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:00.540 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:00.540 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:00.540 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:00.540 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:00.798 request: 00:19:00.798 { 00:19:00.798 "name": "nvme0", 00:19:00.798 "dhchap_key": "key2", 00:19:00.798 "dhchap_ctrlr_key": "key0", 00:19:00.798 "method": "bdev_nvme_set_keys", 00:19:00.798 "req_id": 1 00:19:00.798 } 00:19:00.798 Got JSON-RPC error response 00:19:00.798 response: 00:19:00.798 { 00:19:00.798 "code": -13, 00:19:00.798 "message": "Permission denied" 00:19:00.798 } 00:19:00.798 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:00.798 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:00.798 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:00.798 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:00.798 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:00.798 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:00.798 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.057 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:01.057 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:01.992 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:01.992 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:01.992 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.251 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:02.251 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:02.251 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:02.251 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1624190 00:19:02.251 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1624190 ']' 00:19:02.251 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1624190 00:19:02.251 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:02.251 
14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.251 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1624190 00:19:02.251 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:02.251 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:02.251 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1624190' 00:19:02.251 killing process with pid 1624190 00:19:02.251 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1624190 00:19:02.251 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1624190 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:02.820 rmmod nvme_tcp 00:19:02.820 rmmod nvme_fabrics 00:19:02.820 rmmod nvme_keyring 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1645622 ']' 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1645622 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1645622 ']' 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1645622 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1645622 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1645622' 00:19:02.820 killing process with pid 1645622 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1645622 00:19:02.820 14:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1645622 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:02.820 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.qi2 /tmp/spdk.key-sha256.fwp /tmp/spdk.key-sha384.qn9 /tmp/spdk.key-sha512.5gk /tmp/spdk.key-sha512.8ha /tmp/spdk.key-sha384.3Sv /tmp/spdk.key-sha256.uit '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:05.356 00:19:05.356 real 2m34.579s 00:19:05.356 user 5m53.930s 00:19:05.356 sys 0m24.748s 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.356 ************************************ 00:19:05.356 END TEST nvmf_auth_target 00:19:05.356 ************************************ 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:05.356 ************************************ 00:19:05.356 START TEST nvmf_bdevio_no_huge 00:19:05.356 ************************************ 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:05.356 * Looking for test storage... 
00:19:05.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:05.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.356 --rc genhtml_branch_coverage=1 00:19:05.356 --rc genhtml_function_coverage=1 00:19:05.356 --rc genhtml_legend=1 00:19:05.356 --rc geninfo_all_blocks=1 00:19:05.356 --rc geninfo_unexecuted_blocks=1 00:19:05.356 00:19:05.356 ' 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:05.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.356 --rc genhtml_branch_coverage=1 00:19:05.356 --rc genhtml_function_coverage=1 00:19:05.356 --rc genhtml_legend=1 00:19:05.356 --rc geninfo_all_blocks=1 00:19:05.356 --rc geninfo_unexecuted_blocks=1 00:19:05.356 00:19:05.356 ' 00:19:05.356 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:05.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.356 --rc genhtml_branch_coverage=1 00:19:05.357 --rc genhtml_function_coverage=1 00:19:05.357 --rc genhtml_legend=1 00:19:05.357 --rc geninfo_all_blocks=1 00:19:05.357 --rc geninfo_unexecuted_blocks=1 00:19:05.357 00:19:05.357 ' 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:05.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.357 --rc genhtml_branch_coverage=1 00:19:05.357 --rc genhtml_function_coverage=1 00:19:05.357 --rc genhtml_legend=1 00:19:05.357 --rc geninfo_all_blocks=1 00:19:05.357 --rc geninfo_unexecuted_blocks=1 00:19:05.357 00:19:05.357 ' 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
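The scripts/common.sh trace above ("lt 1.15 2" via cmp_versions) is a component-wise version comparison: both strings are split on ".", "-" and ":" into arrays, then compared field by field as decimals, padding missing fields with zero. A condensed sketch of the same idea, assuming plain dotted numeric versions (the real helper also validates each field and handles the extra separators):

    lt() {  # lt A B -> exit 0 iff version A < version B
        local -a v1 v2; local i
        IFS=. read -ra v1 <<< "$1"
        IFS=. read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields count as 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo "old lcov: enable branch/function coverage flags"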
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:05.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:05.357 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:12.048 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:12.048 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:12.048 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:12.048 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:12.048 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:12.048 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:12.048 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:12.049 
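The "[: : integer expression expected" complaint logged above is noise rather than a test failure: at nvmf/common.sh line 33 an optional flag expands to the empty string, and test(1) requires an integer on both sides of -eq (which flag is unset is not visible in the trace). The message is reproducible, and avoidable, in a couple of lines:

    flag=""                       # an unset optional feature flag
    [ "$flag" -eq 1 ]             # -> bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] || echo "flag disabled"   # defaulting to 0 avoids the noise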
14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:12.049 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:12.049 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:12.049 Found net devices under 0000:af:00.0: cvl_0_0 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
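The "Found net devices under 0000:af:00.x" lines come from a sysfs walk: for each whitelisted NIC (here both E810 0x159b functions), the harness lists /sys/bus/pci/devices/<addr>/net/ and keeps interfaces that are up. A rough standalone equivalent; the operstate read is an assumption, since the trace only shows the resulting "[[ up == up ]]" comparison:

    pci=0000:af:00.0
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue                        # no netdev bound to this function
        name=${dev##*/}                                  # e.g. cvl_0_0
        state=$(cat "$dev/operstate" 2>/dev/null || echo unknown)
        echo "Found net device under $pci: $name ($state)"
    done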
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:12.049 Found net devices under 0000:af:00.1: cvl_0_1 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:12.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:12.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:19:12.049 00:19:12.049 --- 10.0.0.2 ping statistics --- 00:19:12.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.049 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:12.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:12.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:19:12.049 00:19:12.049 --- 10.0.0.1 ping statistics --- 00:19:12.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.049 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1653465 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1653465 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1653465 ']' 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
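Taken together, the nvmf_tcp_init lines above split one physical link into a two-host topology on a single machine: port cvl_0_0 moves into a private network namespace and becomes the target side at 10.0.0.2, its peer cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, a comment-tagged iptables rule opens port 4420, and the two pings prove reachability in both directions. The same sequence as a standalone sketch, with names and addresses as logged:

    NS=cvl_0_0_ns_spdk
    ip netns add $NS
    ip link set cvl_0_0 netns $NS                    # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    # The SPDK_NVMF comment lets teardown strip the rule wholesale with
    # 'iptables-save | grep -v SPDK_NVMF | iptables-restore' (the iptr
    # helper seen in the cleanup traces).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2 && ip netns exec $NS ping -c 1 10.0.0.1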
-- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.049 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:12.308 [2024-12-10 14:21:12.828190] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:19:12.308 [2024-12-10 14:21:12.828263] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:12.308 [2024-12-10 14:21:12.921372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:12.308 [2024-12-10 14:21:12.968079] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.308 [2024-12-10 14:21:12.968111] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.308 [2024-12-10 14:21:12.968118] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.308 [2024-12-10 14:21:12.968124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.308 [2024-12-10 14:21:12.968129] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
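nvmfappstart above launches the target inside that namespace, and the -s 1024 memory budget plus --no-huge is the whole point of this suite: DPDK is forced onto plain 4 KiB pages instead of hugepages. A sketch of the launch, with every flag copied from the trace:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    # waitforlisten (the 'Waiting for process to start up...' line above)
    # polls until the app answers on /var/tmp/spdk.sock before any RPC is sent.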
00:19:12.308 [2024-12-10 14:21:12.969181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:12.308 [2024-12-10 14:21:12.969286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:12.308 [2024-12-10 14:21:12.969392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:12.308 [2024-12-10 14:21:12.969394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:13.242 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.242 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:19:13.242 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:13.242 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:13.242 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:13.242 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.242 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:13.242 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.242 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:13.242 [2024-12-10 14:21:13.721977] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.242 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.242 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:13.242 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.242 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:13.242 Malloc0 00:19:13.242 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.242 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:13.242 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.242 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:13.242 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.242 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:13.243 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.243 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:13.243 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.243 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:13.243 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.243 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:13.243 [2024-12-10 14:21:13.766232] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:13.243 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.243 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:13.243 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:13.243 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:13.243 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:13.243 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:13.243 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:13.243 { 00:19:13.243 "params": { 00:19:13.243 "name": "Nvme$subsystem", 00:19:13.243 "trtype": "$TEST_TRANSPORT", 00:19:13.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:13.243 "adrfam": "ipv4", 00:19:13.243 "trsvcid": "$NVMF_PORT", 00:19:13.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:13.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:13.243 "hdgst": ${hdgst:-false}, 00:19:13.243 "ddgst": ${ddgst:-false} 00:19:13.243 }, 00:19:13.243 "method": "bdev_nvme_attach_controller" 00:19:13.243 } 00:19:13.243 EOF 00:19:13.243 )") 00:19:13.243 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:13.243 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:19:13.243 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:13.243 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:13.243 "params": { 00:19:13.243 "name": "Nvme1", 00:19:13.243 "trtype": "tcp", 00:19:13.243 "traddr": "10.0.0.2", 00:19:13.243 "adrfam": "ipv4", 00:19:13.243 "trsvcid": "4420", 00:19:13.243 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.243 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:13.243 "hdgst": false, 00:19:13.243 "ddgst": false 00:19:13.243 }, 00:19:13.243 "method": "bdev_nvme_attach_controller" 00:19:13.243 }' 00:19:13.243 [2024-12-10 14:21:13.814873] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
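The rpc_cmd calls above, plus the JSON that gen_nvmf_target_json prints, are everything bdevio needs: a TCP transport, a RAM-backed namespace, and a listener. As equivalent standalone rpc.py invocations, assuming the target's default /var/tmp/spdk.sock and taking all names, sizes and addresses from the log:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192     # transport options exactly as traced
    $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM disk, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevio reads its bdev config from a file descriptor; /dev/fd/62 above is
    # where the harness exposes the gen_nvmf_target_json output shown in the
    # trace. Standalone, process substitution does the same job:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
        --json <(gen_nvmf_target_json) --no-huge -s 1024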
00:19:13.243 [2024-12-10 14:21:13.814920] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1653712 ] 00:19:13.243 [2024-12-10 14:21:13.902474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:13.243 [2024-12-10 14:21:13.950179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.243 [2024-12-10 14:21:13.950291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.243 [2024-12-10 14:21:13.950291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.502 I/O targets: 00:19:13.502 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:13.502 00:19:13.502 00:19:13.502 CUnit - A unit testing framework for C - Version 2.1-3 00:19:13.502 http://cunit.sourceforge.net/ 00:19:13.502 00:19:13.502 00:19:13.502 Suite: bdevio tests on: Nvme1n1 00:19:13.502 Test: blockdev write read block ...passed 00:19:13.760 Test: blockdev write zeroes read block ...passed 00:19:13.760 Test: blockdev write zeroes read no split ...passed 00:19:13.760 Test: blockdev write zeroes read split ...passed 00:19:13.760 Test: blockdev write zeroes read split partial ...passed 00:19:13.760 Test: blockdev reset ...[2024-12-10 14:21:14.279094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:13.760 [2024-12-10 14:21:14.279155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cdef0 (9): Bad file descriptor 00:19:13.760 [2024-12-10 14:21:14.414568] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:19:13.760 passed 00:19:13.760 Test: blockdev write read 8 blocks ...passed 00:19:13.760 Test: blockdev write read size > 128k ...passed 00:19:13.760 Test: blockdev write read invalid size ...passed 00:19:13.760 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:13.760 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:13.760 Test: blockdev write read max offset ...passed 00:19:14.019 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:14.019 Test: blockdev writev readv 8 blocks ...passed 00:19:14.019 Test: blockdev writev readv 30 x 1block ...passed 00:19:14.019 Test: blockdev writev readv block ...passed 00:19:14.019 Test: blockdev writev readv size > 128k ...passed 00:19:14.019 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:14.019 Test: blockdev comparev and writev ...[2024-12-10 14:21:14.586953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.019 [2024-12-10 14:21:14.586982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:14.019 [2024-12-10 14:21:14.586997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.019 [2024-12-10 14:21:14.587005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.019 [2024-12-10 14:21:14.587239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.019 [2024-12-10 14:21:14.587250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:14.019 [2024-12-10 14:21:14.587262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.019 [2024-12-10 14:21:14.587270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:14.019 [2024-12-10 14:21:14.587492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.019 [2024-12-10 14:21:14.587503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:14.019 [2024-12-10 14:21:14.587515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.019 [2024-12-10 14:21:14.587528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:14.019 [2024-12-10 14:21:14.587753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.019 [2024-12-10 14:21:14.587763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:14.019 [2024-12-10 14:21:14.587775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.019 [2024-12-10 14:21:14.587782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:14.019 passed 00:19:14.019 Test: blockdev nvme passthru rw ...passed 00:19:14.019 Test: blockdev nvme passthru vendor specific ...[2024-12-10 14:21:14.670510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:14.019 [2024-12-10 14:21:14.670528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:14.019 [2024-12-10 14:21:14.670634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:14.019 [2024-12-10 14:21:14.670644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:14.019 [2024-12-10 14:21:14.670744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:14.019 [2024-12-10 14:21:14.670754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:14.019 [2024-12-10 14:21:14.670854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:14.019 [2024-12-10 14:21:14.670864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:14.019 passed 00:19:14.019 Test: blockdev nvme admin passthru ...passed 00:19:14.019 Test: blockdev copy ...passed 00:19:14.019 00:19:14.019 Run Summary: Type Total Ran Passed Failed Inactive 00:19:14.019 suites 1 1 n/a 0 0 00:19:14.019 tests 23 23 23 0 0 00:19:14.019 asserts 152 152 152 0 n/a 00:19:14.019 00:19:14.019 Elapsed time = 1.162 seconds 00:19:14.277 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:14.277 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.277 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:14.277 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.277 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:14.277 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:14.277 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:14.277 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:14.277 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:14.277 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:14.277 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:14.277 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:14.277 rmmod nvme_tcp 00:19:14.538 rmmod nvme_fabrics 00:19:14.538 rmmod nvme_keyring 00:19:14.538 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:14.538 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:19:14.538 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:14.538 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1653465 ']' 00:19:14.538 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1653465 00:19:14.538 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1653465 ']' 00:19:14.538 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1653465 00:19:14.538 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:14.538 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.538 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1653465 00:19:14.538 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:14.538 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:14.538 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1653465' 00:19:14.538 killing process with pid 1653465 00:19:14.538 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1653465 00:19:14.538 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1653465 00:19:14.806 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:14.806 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:14.806 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:14.806 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:14.806 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:14.806 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:14.806 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:14.806 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:14.806 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:14.806 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.806 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:14.806 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.341 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:17.341 00:19:17.341 real 0m11.790s 00:19:17.341 user 0m13.645s 00:19:17.341 sys 0m6.094s 00:19:17.341 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.341 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:17.341 ************************************ 00:19:17.341 END TEST nvmf_bdevio_no_huge 00:19:17.341 ************************************ 00:19:17.341 14:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:17.341 14:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:17.341 14:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.341 14:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:17.341 ************************************ 00:19:17.341 START TEST nvmf_tls 00:19:17.341 ************************************ 00:19:17.341 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:17.341 * Looking for test storage... 00:19:17.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:17.341 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:17.341 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:19:17.341 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:17.341 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:17.341 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:17.341 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:17.341 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:17.341 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.341 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:17.341 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:17.341 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:17.341 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:17.341 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:17.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.342 --rc genhtml_branch_coverage=1 00:19:17.342 --rc genhtml_function_coverage=1 00:19:17.342 --rc genhtml_legend=1 00:19:17.342 --rc geninfo_all_blocks=1 00:19:17.342 --rc geninfo_unexecuted_blocks=1 00:19:17.342 00:19:17.342 ' 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:17.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.342 --rc genhtml_branch_coverage=1 00:19:17.342 --rc genhtml_function_coverage=1 00:19:17.342 --rc genhtml_legend=1 00:19:17.342 --rc geninfo_all_blocks=1 00:19:17.342 --rc geninfo_unexecuted_blocks=1 00:19:17.342 00:19:17.342 ' 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:17.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.342 --rc genhtml_branch_coverage=1 00:19:17.342 --rc genhtml_function_coverage=1 00:19:17.342 --rc genhtml_legend=1 00:19:17.342 --rc geninfo_all_blocks=1 00:19:17.342 --rc geninfo_unexecuted_blocks=1 00:19:17.342 00:19:17.342 ' 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:17.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.342 --rc genhtml_branch_coverage=1 00:19:17.342 --rc genhtml_function_coverage=1 00:19:17.342 --rc genhtml_legend=1 00:19:17.342 --rc geninfo_all_blocks=1 00:19:17.342 --rc geninfo_unexecuted_blocks=1 00:19:17.342 00:19:17.342 ' 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
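[editor's note] The xtrace above shows tls.sh gating lcov coverage options on the installed lcov version via the lt/cmp_versions helpers in scripts/common.sh: both version strings are split on '.', '-' and ':' and compared field by field, numerically. A condensed sketch of that idiom (simplified from the real helper, which the trace shows also handling decimal validation):

lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
  local IFS=.-: op=$2            # split version fields on '.', '-' and ':'
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$3"
  local v
  for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
    if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' ]]; return; fi
    if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' ]]; return; fi
  done
  [[ $op == *'='* ]]             # versions equal: only '==', '<=', '>=' hold
}
lt 1.15 2 && echo 'installed lcov is older than 2.x'   # the case traced above

Here lcov reports 1.15, so lt 1.15 2 succeeds and the pre-2.x LCOV_OPTS branch-coverage flags are exported.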
00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:17.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:17.342 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:23.912 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:23.912 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.912 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:23.912 Found net devices under 0000:af:00.0: cvl_0_0 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:23.913 Found net devices under 0000:af:00.1: cvl_0_1 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:23.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:23.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:19:23.913 00:19:23.913 --- 10.0.0.2 ping statistics --- 00:19:23.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.913 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:23.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:23.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:19:23.913 00:19:23.913 --- 10.0.0.1 ping statistics --- 00:19:23.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.913 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1657743 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1657743 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1657743 ']' 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.913 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.913 [2024-12-10 14:21:24.645128] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
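[editor's note] Before the EAL parameter dump continues below, the namespace plumbing just traced is worth collapsing into plain commands. This is nvmf_tcp_init's two-port phy topology: one e810 port stays in the root namespace as the initiator (10.0.0.1), the other is moved into a private namespace for the target (10.0.0.2), and iptables opens the NVMe/TCP port; device names as in this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                         # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target ns -> root ns

The two successful pings above confirm reachability in both directions before nvmf_tgt is started inside the namespace (NVMF_APP is prefixed with the netns exec command for exactly that reason).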
00:19:23.913 [2024-12-10 14:21:24.645170] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.172 [2024-12-10 14:21:24.728286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.172 [2024-12-10 14:21:24.767147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.172 [2024-12-10 14:21:24.767183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.172 [2024-12-10 14:21:24.767190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.172 [2024-12-10 14:21:24.767196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.172 [2024-12-10 14:21:24.767201] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:24.172 [2024-12-10 14:21:24.767722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.172 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.172 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:24.172 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:24.172 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:24.172 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.172 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.172 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:24.172 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:24.430 true 00:19:24.431 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:24.431 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:24.689 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:24.689 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:24.689 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:24.689 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:24.689 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:24.947 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:24.947 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:24.947 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:25.206 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:25.206 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:25.464 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:25.464 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:25.464 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:25.464 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:25.464 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:25.464 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:25.464 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:25.723 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:25.723 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:25.981 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:25.981 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:25.981 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:25.981 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:25.981 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:26.240 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:26.240 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:26.240 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:26.240 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:26.240 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.240 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:26.240 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:26.240 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:26.240 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:26.240 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:26.240 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:26.240 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:26.240 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:19:26.240 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:26.240 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:26.240 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:26.240 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:26.505 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:26.505 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:26.505 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.XSkJgbsAxU 00:19:26.505 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:26.505 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.LuHqK89Fd8 00:19:26.505 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:26.505 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:26.505 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.XSkJgbsAxU 00:19:26.505 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.LuHqK89Fd8 00:19:26.505 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:26.505 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:26.764 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.XSkJgbsAxU 00:19:26.764 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.XSkJgbsAxU 00:19:26.764 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:27.023 [2024-12-10 14:21:27.610481] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.023 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:27.282 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:27.282 [2024-12-10 14:21:27.975399] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:27.282 [2024-12-10 14:21:27.975603] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.282 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:27.541 malloc0 00:19:27.541 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:27.800 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.XSkJgbsAxU 00:19:27.800 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:28.058 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.XSkJgbsAxU 00:19:40.265 Initializing NVMe Controllers 00:19:40.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:40.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:40.265 Initialization complete. Launching workers. 00:19:40.265 ======================================================== 00:19:40.265 Latency(us) 00:19:40.265 Device Information : IOPS MiB/s Average min max 00:19:40.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16960.82 66.25 3773.48 829.53 4540.73 00:19:40.265 ======================================================== 00:19:40.265 Total : 16960.82 66.25 3773.48 829.53 4540.73 00:19:40.265 00:19:40.265 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XSkJgbsAxU 00:19:40.265 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:40.265 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:40.265 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:40.265 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.XSkJgbsAxU 00:19:40.266 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:40.266 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1660228 00:19:40.266 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:40.266 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1660228 /var/tmp/bdevperf.sock 00:19:40.266 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1660228 ']' 00:19:40.266 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.266 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.266 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
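[editor's note] The two key strings generated earlier (key0 in /tmp/tmp.XSkJgbsAxU, key_2 in /tmp/tmp.LuHqK89Fd8, both chmod'ed 0600) follow the NVMe TLS PSK interchange format. A sketch of what the format_interchange_psk/format_key pair traced above appears to compute - assuming the inline python appends a little-endian CRC-32 of the key bytes before base64-encoding, with the two-digit field ('01' here) encoding the PSK hash:

python - <<'EOF'
import base64, zlib
psk = b"00112233445566778899aabbccddeeff"            # configured PSK bytes
crc = zlib.crc32(psk).to_bytes(4, "little")          # integrity check appended
print("NVMeTLSkey-1:01:" + base64.b64encode(psk + crc).decode() + ":")
EOF
# expected to reproduce the key0 value above:
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The key file is then registered as key0 in the target keyring (keyring_file_add_key), bound to host1 via nvmf_subsystem_add_host --psk key0, and TLS is enabled on the 4420 listener with the -k flag, which the target logs as experimental.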
00:19:40.266 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.266 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.266 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:40.266 [2024-12-10 14:21:38.869090] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:19:40.266 [2024-12-10 14:21:38.869142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1660228 ] 00:19:40.266 [2024-12-10 14:21:38.930394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.266 [2024-12-10 14:21:38.970586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.266 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.266 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:40.266 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XSkJgbsAxU 00:19:40.266 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:40.266 [2024-12-10 14:21:39.409769] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:40.266 TLSTESTn1 00:19:40.266 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:40.266 Running I/O for 10 seconds... 
00:19:41.200 5461.00 IOPS, 21.33 MiB/s [2024-12-10T13:21:42.874Z] 5551.50 IOPS, 21.69 MiB/s [2024-12-10T13:21:43.809Z] 5579.33 IOPS, 21.79 MiB/s [2024-12-10T13:21:44.743Z] 5600.75 IOPS, 21.88 MiB/s [2024-12-10T13:21:45.680Z] 5630.40 IOPS, 21.99 MiB/s [2024-12-10T13:21:46.614Z] 5634.67 IOPS, 22.01 MiB/s [2024-12-10T13:21:47.988Z] 5641.57 IOPS, 22.04 MiB/s [2024-12-10T13:21:48.921Z] 5633.75 IOPS, 22.01 MiB/s [2024-12-10T13:21:49.856Z] 5566.78 IOPS, 21.75 MiB/s [2024-12-10T13:21:49.856Z] 5519.00 IOPS, 21.56 MiB/s 00:19:49.116 Latency(us) 00:19:49.116 [2024-12-10T13:21:49.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.116 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:49.116 Verification LBA range: start 0x0 length 0x2000 00:19:49.116 TLSTESTn1 : 10.02 5522.29 21.57 0.00 0.00 23143.68 6740.85 26963.38 00:19:49.116 [2024-12-10T13:21:49.856Z] =================================================================================================================== 00:19:49.116 [2024-12-10T13:21:49.856Z] Total : 5522.29 21.57 0.00 0.00 23143.68 6740.85 26963.38 00:19:49.116 { 00:19:49.116 "results": [ 00:19:49.116 { 00:19:49.116 "job": "TLSTESTn1", 00:19:49.116 "core_mask": "0x4", 00:19:49.116 "workload": "verify", 00:19:49.116 "status": "finished", 00:19:49.116 "verify_range": { 00:19:49.116 "start": 0, 00:19:49.116 "length": 8192 00:19:49.116 }, 00:19:49.116 "queue_depth": 128, 00:19:49.116 "io_size": 4096, 00:19:49.116 "runtime": 10.017032, 00:19:49.116 "iops": 5522.294428130009, 00:19:49.116 "mibps": 21.571462609882847, 00:19:49.116 "io_failed": 0, 00:19:49.116 "io_timeout": 0, 00:19:49.116 "avg_latency_us": 23143.684697290162, 00:19:49.116 "min_latency_us": 6740.845714285714, 00:19:49.116 "max_latency_us": 26963.382857142857 00:19:49.116 } 00:19:49.116 ], 00:19:49.116 "core_count": 1 00:19:49.116 } 00:19:49.116 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:49.116 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1660228 00:19:49.116 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1660228 ']' 00:19:49.116 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1660228 00:19:49.116 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:49.116 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.116 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1660228 00:19:49.116 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:49.116 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:49.116 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1660228' 00:19:49.116 killing process with pid 1660228 00:19:49.116 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1660228 00:19:49.116 Received shutdown signal, test time was about 10.000000 seconds 00:19:49.116 00:19:49.116 Latency(us) 00:19:49.116 [2024-12-10T13:21:49.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.116 [2024-12-10T13:21:49.856Z] 
=================================================================================================================== 00:19:49.116 [2024-12-10T13:21:49.856Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:49.116 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1660228 00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LuHqK89Fd8 00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LuHqK89Fd8 00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LuHqK89Fd8 00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.LuHqK89Fd8 00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1661915 00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1661915 /var/tmp/bdevperf.sock 00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1661915 ']' 00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
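[editor's note] That completes the first, positive TLS case at tls.sh@144: run_bdevperf starts bdevperf idle, loads the PSK into its keyring, attaches a TLS-enabled NVMe bdev, and drives the verify workload over the private RPC socket. Reduced to its four moves (commands as traced above, paths abbreviated):

bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &   # start idle (-z)
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XSkJgbsAxU
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0          # TLS handshake happens here
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

With matching keys on both sides, the run above sustains roughly 5.5k IOPS of 4K verify I/O through the TLS 1.3 connection. The two cases that follow are expected failures, hence the NOT wrapper in their traces.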
00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.375 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.375 [2024-12-10 14:21:49.913523] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:19:49.375 [2024-12-10 14:21:49.913570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661915 ] 00:19:49.376 [2024-12-10 14:21:49.990584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.376 [2024-12-10 14:21:50.036778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.634 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.634 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:49.634 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LuHqK89Fd8 00:19:49.634 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:49.892 [2024-12-10 14:21:50.469612] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:49.892 [2024-12-10 14:21:50.478888] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:49.892 [2024-12-10 14:21:50.479206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b1700 (107): Transport endpoint is not connected 00:19:49.892 [2024-12-10 14:21:50.479913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b1700 (9): Bad file descriptor 00:19:49.892 [2024-12-10 14:21:50.480914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:49.892 [2024-12-10 14:21:50.480924] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:49.892 [2024-12-10 14:21:50.480931] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:49.892 [2024-12-10 14:21:50.480941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
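[editor's note] The attach failure traced above is the point of this case: tls.sh@147 repeats the run with key_2 (/tmp/tmp.LuHqK89Fd8) for host1, while the subsystem only knows key0, so the handshake fails and the controller never initializes. The whole thing runs under autotest_common.sh's NOT wrapper, which inverts the exit status so the negative test passes only if the command fails (the es=1 and (( !es == 0 )) lines in the trace). A condensed sketch of the idiom:

NOT() {
  local es=0
  "$@" || es=$?          # run the wrapped command, capture its exit status
  # (the real helper additionally distinguishes es > 128, death by signal)
  (( es != 0 ))          # invert: succeed only when the command failed
}
NOT false && echo 'negative test passed'

The JSON-RPC request/response dump that follows is bdevperf's RPC client reporting the same bdev_nvme_attach_controller failure.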
00:19:49.892 request: 00:19:49.892 { 00:19:49.892 "name": "TLSTEST", 00:19:49.892 "trtype": "tcp", 00:19:49.892 "traddr": "10.0.0.2", 00:19:49.892 "adrfam": "ipv4", 00:19:49.892 "trsvcid": "4420", 00:19:49.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.892 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:49.892 "prchk_reftag": false, 00:19:49.892 "prchk_guard": false, 00:19:49.892 "hdgst": false, 00:19:49.892 "ddgst": false, 00:19:49.892 "psk": "key0", 00:19:49.892 "allow_unrecognized_csi": false, 00:19:49.892 "method": "bdev_nvme_attach_controller", 00:19:49.892 "req_id": 1 00:19:49.892 } 00:19:49.892 Got JSON-RPC error response 00:19:49.892 response: 00:19:49.892 { 00:19:49.892 "code": -5, 00:19:49.892 "message": "Input/output error" 00:19:49.892 } 00:19:49.892 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1661915 00:19:49.892 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1661915 ']' 00:19:49.892 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1661915 00:19:49.892 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:49.892 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.892 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1661915 00:19:49.892 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:49.892 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:49.892 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1661915' 00:19:49.892 killing process with pid 1661915 00:19:49.892 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1661915 00:19:49.892 Received shutdown signal, test time was about 10.000000 seconds 00:19:49.892 00:19:49.892 Latency(us) 00:19:49.892 [2024-12-10T13:21:50.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.892 [2024-12-10T13:21:50.632Z] =================================================================================================================== 00:19:49.892 [2024-12-10T13:21:50.632Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:49.892 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1661915 00:19:50.150 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:50.150 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:50.150 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:50.150 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:50.150 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:50.150 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.XSkJgbsAxU 00:19:50.150 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:50.151 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.XSkJgbsAxU 00:19:50.151 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:50.151 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:50.151 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:50.151 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:50.151 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.XSkJgbsAxU 00:19:50.151 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:50.151 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:50.151 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:50.151 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.XSkJgbsAxU 00:19:50.151 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:50.151 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1662091 00:19:50.151 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:50.151 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:50.151 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1662091 /var/tmp/bdevperf.sock 00:19:50.151 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1662091 ']' 00:19:50.151 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.151 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.151 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.151 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.151 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.151 [2024-12-10 14:21:50.751344] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:19:50.151 [2024-12-10 14:21:50.751390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662091 ] 00:19:50.151 [2024-12-10 14:21:50.828243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.151 [2024-12-10 14:21:50.863930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.408 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.409 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:50.409 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XSkJgbsAxU 00:19:50.666 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:50.666 [2024-12-10 14:21:51.348142] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:50.666 [2024-12-10 14:21:51.352628] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:50.666 [2024-12-10 14:21:51.352649] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:50.666 [2024-12-10 14:21:51.352672] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:50.666 [2024-12-10 14:21:51.353400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd94700 (107): Transport endpoint is not connected 00:19:50.666 [2024-12-10 14:21:51.354393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd94700 (9): Bad file descriptor 00:19:50.666 [2024-12-10 14:21:51.355394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:50.666 [2024-12-10 14:21:51.355404] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:50.666 [2024-12-10 14:21:51.355412] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:50.666 [2024-12-10 14:21:51.355427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
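The three *ERROR* stanzas above are the signature of a TLS PSK-identity mismatch: the target composes the identity as "NVMe0R01 <hostnqn> <subnqn>", and this run registered a PSK for host1 only, so a session presenting host2 finds no key, the handshake is torn down, and bdev_nvme_attach_controller surfaces it as the code -5 (Input/output error) response dumped below. A minimal sketch of driving the same RPCs by hand, assuming a plain SPDK checkout (the harness uses its absolute Jenkins path to scripts/rpc.py, and "key0" must already exist in each process's keyring):

  # Register the PSK for host1 on the target, load the same key file into the
  # initiator's keyring, then attach with the matching hostnqn; presenting
  # host2 instead fails exactly as logged above.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key0
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XSkJgbsAxU
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0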
00:19:50.666 request: 00:19:50.666 { 00:19:50.666 "name": "TLSTEST", 00:19:50.666 "trtype": "tcp", 00:19:50.666 "traddr": "10.0.0.2", 00:19:50.666 "adrfam": "ipv4", 00:19:50.666 "trsvcid": "4420", 00:19:50.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.666 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:50.666 "prchk_reftag": false, 00:19:50.666 "prchk_guard": false, 00:19:50.666 "hdgst": false, 00:19:50.666 "ddgst": false, 00:19:50.666 "psk": "key0", 00:19:50.666 "allow_unrecognized_csi": false, 00:19:50.666 "method": "bdev_nvme_attach_controller", 00:19:50.666 "req_id": 1 00:19:50.666 } 00:19:50.666 Got JSON-RPC error response 00:19:50.666 response: 00:19:50.666 { 00:19:50.666 "code": -5, 00:19:50.666 "message": "Input/output error" 00:19:50.666 } 00:19:50.666 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1662091 00:19:50.666 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1662091 ']' 00:19:50.666 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1662091 00:19:50.666 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:50.666 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.666 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1662091 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1662091' 00:19:50.925 killing process with pid 1662091 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1662091 00:19:50.925 Received shutdown signal, test time was about 10.000000 seconds 00:19:50.925 00:19:50.925 Latency(us) 00:19:50.925 [2024-12-10T13:21:51.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.925 [2024-12-10T13:21:51.665Z] =================================================================================================================== 00:19:50.925 [2024-12-10T13:21:51.665Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1662091 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XSkJgbsAxU 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.XSkJgbsAxU 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XSkJgbsAxU 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.XSkJgbsAxU 00:19:50.925 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:50.926 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1662322 00:19:50.926 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:50.926 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:50.926 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1662322 /var/tmp/bdevperf.sock 00:19:50.926 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1662322 ']' 00:19:50.926 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.926 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.926 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.926 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.926 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.926 [2024-12-10 14:21:51.632683] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:19:50.926 [2024-12-10 14:21:51.632735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662322 ] 00:19:51.184 [2024-12-10 14:21:51.710855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.184 [2024-12-10 14:21:51.746854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.184 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.184 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:51.184 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XSkJgbsAxU 00:19:51.441 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:51.699 [2024-12-10 14:21:52.205767] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.699 [2024-12-10 14:21:52.213157] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:51.699 [2024-12-10 14:21:52.213176] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:51.699 [2024-12-10 14:21:52.213199] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:51.699 [2024-12-10 14:21:52.213986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9eb700 (107): Transport endpoint is not connected 00:19:51.699 [2024-12-10 14:21:52.214980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9eb700 (9): Bad file descriptor 00:19:51.699 [2024-12-10 14:21:52.215982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:51.699 [2024-12-10 14:21:52.215994] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:51.699 [2024-12-10 14:21:52.216002] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:51.699 [2024-12-10 14:21:52.216013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
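This is the mirror image of the host-NQN case above: here the subsystem NQN is flipped to cnode2, the identity lookup fails for "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2", and the response dumped below is the same code -5. The third negative test that follows (target/tls.sh@156) hands run_bdevperf an empty key path and fails one layer earlier, inside the keyring, since keyring_file_add_key accepts only absolute paths. An illustrative contrast against the bdevperf RPC socket used in this run:

  # '' is rejected before any file is opened; a real absolute path is taken.
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''
  #   -> keyring: "Non-absolute paths are not allowed", RPC code -1
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XSkJgbsAxU
  #   -> accepted; bdev_nvme_attach_controller may then reference it as "key0"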
00:19:51.699 request: 00:19:51.699 { 00:19:51.699 "name": "TLSTEST", 00:19:51.699 "trtype": "tcp", 00:19:51.699 "traddr": "10.0.0.2", 00:19:51.699 "adrfam": "ipv4", 00:19:51.699 "trsvcid": "4420", 00:19:51.699 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:51.699 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.699 "prchk_reftag": false, 00:19:51.699 "prchk_guard": false, 00:19:51.699 "hdgst": false, 00:19:51.699 "ddgst": false, 00:19:51.699 "psk": "key0", 00:19:51.699 "allow_unrecognized_csi": false, 00:19:51.699 "method": "bdev_nvme_attach_controller", 00:19:51.699 "req_id": 1 00:19:51.699 } 00:19:51.699 Got JSON-RPC error response 00:19:51.699 response: 00:19:51.699 { 00:19:51.699 "code": -5, 00:19:51.699 "message": "Input/output error" 00:19:51.699 } 00:19:51.699 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1662322 00:19:51.700 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1662322 ']' 00:19:51.700 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1662322 00:19:51.700 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:51.700 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.700 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1662322 00:19:51.700 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:51.700 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:51.700 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1662322' 00:19:51.700 killing process with pid 1662322 00:19:51.700 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1662322 00:19:51.700 Received shutdown signal, test time was about 10.000000 seconds 00:19:51.700 00:19:51.700 Latency(us) 00:19:51.700 [2024-12-10T13:21:52.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.700 [2024-12-10T13:21:52.440Z] =================================================================================================================== 00:19:51.700 [2024-12-10T13:21:52.440Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:51.700 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1662322 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:51.958 
14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1662343 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1662343 /var/tmp/bdevperf.sock 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1662343 ']' 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:51.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:51.958 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.958 [2024-12-10 14:21:52.494557] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:19:51.958 [2024-12-10 14:21:52.494605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662343 ] 00:19:51.958 [2024-12-10 14:21:52.564350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.958 [2024-12-10 14:21:52.600931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.216 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.216 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:52.216 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:52.216 [2024-12-10 14:21:52.879909] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:52.216 [2024-12-10 14:21:52.879942] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:52.216 request: 00:19:52.216 { 00:19:52.216 "name": "key0", 00:19:52.216 "path": "", 00:19:52.216 "method": "keyring_file_add_key", 00:19:52.216 "req_id": 1 00:19:52.216 } 00:19:52.216 Got JSON-RPC error response 00:19:52.216 response: 00:19:52.216 { 00:19:52.216 "code": -1, 00:19:52.216 "message": "Operation not permitted" 00:19:52.216 } 00:19:52.216 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:52.474 [2024-12-10 14:21:53.072487] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:52.474 [2024-12-10 14:21:53.072521] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:52.474 request: 00:19:52.474 { 00:19:52.474 "name": "TLSTEST", 00:19:52.474 "trtype": "tcp", 00:19:52.474 "traddr": "10.0.0.2", 00:19:52.474 "adrfam": "ipv4", 00:19:52.474 "trsvcid": "4420", 00:19:52.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:52.474 "prchk_reftag": false, 00:19:52.474 "prchk_guard": false, 00:19:52.474 "hdgst": false, 00:19:52.474 "ddgst": false, 00:19:52.474 "psk": "key0", 00:19:52.474 "allow_unrecognized_csi": false, 00:19:52.474 "method": "bdev_nvme_attach_controller", 00:19:52.474 "req_id": 1 00:19:52.474 } 00:19:52.474 Got JSON-RPC error response 00:19:52.474 response: 00:19:52.474 { 00:19:52.474 "code": -126, 00:19:52.474 "message": "Required key not available" 00:19:52.474 } 00:19:52.474 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1662343 00:19:52.474 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1662343 ']' 00:19:52.474 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1662343 00:19:52.474 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:52.474 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.474 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1662343 00:19:52.474 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:52.474 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:52.474 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1662343' 00:19:52.474 killing process with pid 1662343 00:19:52.474 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1662343 00:19:52.474 Received shutdown signal, test time was about 10.000000 seconds 00:19:52.474 00:19:52.474 Latency(us) 00:19:52.474 [2024-12-10T13:21:53.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.474 [2024-12-10T13:21:53.214Z] =================================================================================================================== 00:19:52.474 [2024-12-10T13:21:53.214Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:52.474 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1662343 00:19:52.733 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:52.733 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:52.733 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:52.733 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:52.733 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:52.733 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1657743 00:19:52.733 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1657743 ']' 00:19:52.733 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1657743 00:19:52.733 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:52.733 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.733 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1657743 00:19:52.733 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:52.733 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:52.733 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1657743' 00:19:52.733 killing process with pid 1657743 00:19:52.733 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1657743 00:19:52.733 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1657743 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:52.991 14:21:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.DeTIfxVeJ5 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.DeTIfxVeJ5 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1662585 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1662585 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1662585 ']' 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.991 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.991 [2024-12-10 14:21:53.621066] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:19:52.991 [2024-12-10 14:21:53.621114] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.991 [2024-12-10 14:21:53.701700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.250 [2024-12-10 14:21:53.741352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.250 [2024-12-10 14:21:53.741383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:53.250 [2024-12-10 14:21:53.741390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.250 [2024-12-10 14:21:53.741396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.250 [2024-12-10 14:21:53.741401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.250 [2024-12-10 14:21:53.741919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.250 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.250 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:53.250 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:53.250 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:53.250 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.250 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.250 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.DeTIfxVeJ5 00:19:53.250 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.DeTIfxVeJ5 00:19:53.250 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:53.508 [2024-12-10 14:21:54.049695] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.508 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:53.766 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:53.766 [2024-12-10 14:21:54.450725] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:53.766 [2024-12-10 14:21:54.450938] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.766 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:54.025 malloc0 00:19:54.025 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:54.283 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.DeTIfxVeJ5 00:19:54.539 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:54.540 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DeTIfxVeJ5 00:19:54.540 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:54.540 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:54.540 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:54.540 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.DeTIfxVeJ5 00:19:54.540 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:54.540 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:54.540 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1662834 00:19:54.540 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:54.540 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1662834 /var/tmp/bdevperf.sock 00:19:54.540 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1662834 ']' 00:19:54.540 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.540 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.540 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.540 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.540 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.797 [2024-12-10 14:21:55.289002] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:19:54.797 [2024-12-10 14:21:55.289053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662834 ] 00:19:54.797 [2024-12-10 14:21:55.370864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.797 [2024-12-10 14:21:55.409940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.797 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.797 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:54.797 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DeTIfxVeJ5 00:19:55.054 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:55.311 [2024-12-10 14:21:55.873940] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.311 TLSTESTn1 00:19:55.311 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:55.569 Running I/O for 10 seconds... 00:19:57.434 5452.00 IOPS, 21.30 MiB/s [2024-12-10T13:21:59.106Z] 5548.00 IOPS, 21.67 MiB/s [2024-12-10T13:22:00.478Z] 5606.00 IOPS, 21.90 MiB/s [2024-12-10T13:22:01.411Z] 5573.50 IOPS, 21.77 MiB/s [2024-12-10T13:22:02.343Z] 5580.00 IOPS, 21.80 MiB/s [2024-12-10T13:22:03.277Z] 5563.83 IOPS, 21.73 MiB/s [2024-12-10T13:22:04.213Z] 5540.43 IOPS, 21.64 MiB/s [2024-12-10T13:22:05.275Z] 5559.25 IOPS, 21.72 MiB/s [2024-12-10T13:22:06.209Z] 5541.22 IOPS, 21.65 MiB/s [2024-12-10T13:22:06.209Z] 5549.80 IOPS, 21.68 MiB/s 00:20:05.469 Latency(us) 00:20:05.469 [2024-12-10T13:22:06.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.469 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:05.469 Verification LBA range: start 0x0 length 0x2000 00:20:05.469 TLSTESTn1 : 10.01 5555.27 21.70 0.00 0.00 23007.85 5398.92 23967.45 00:20:05.469 [2024-12-10T13:22:06.209Z] =================================================================================================================== 00:20:05.469 [2024-12-10T13:22:06.209Z] Total : 5555.27 21.70 0.00 0.00 23007.85 5398.92 23967.45 00:20:05.469 { 00:20:05.469 "results": [ 00:20:05.469 { 00:20:05.469 "job": "TLSTESTn1", 00:20:05.469 "core_mask": "0x4", 00:20:05.469 "workload": "verify", 00:20:05.469 "status": "finished", 00:20:05.469 "verify_range": { 00:20:05.469 "start": 0, 00:20:05.469 "length": 8192 00:20:05.469 }, 00:20:05.469 "queue_depth": 128, 00:20:05.469 "io_size": 4096, 00:20:05.469 "runtime": 10.013011, 00:20:05.469 "iops": 5555.27203555454, 00:20:05.469 "mibps": 21.70028138888492, 00:20:05.469 "io_failed": 0, 00:20:05.469 "io_timeout": 0, 00:20:05.469 "avg_latency_us": 23007.85038423114, 00:20:05.469 "min_latency_us": 5398.918095238095, 00:20:05.469 "max_latency_us": 23967.45142857143 00:20:05.469 } 00:20:05.469 ], 00:20:05.469 
"core_count": 1 00:20:05.469 } 00:20:05.469 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:05.469 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1662834 00:20:05.469 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1662834 ']' 00:20:05.469 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1662834 00:20:05.470 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:05.470 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.470 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1662834 00:20:05.470 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:05.470 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:05.470 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1662834' 00:20:05.470 killing process with pid 1662834 00:20:05.470 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1662834 00:20:05.470 Received shutdown signal, test time was about 10.000000 seconds 00:20:05.470 00:20:05.470 Latency(us) 00:20:05.470 [2024-12-10T13:22:06.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.470 [2024-12-10T13:22:06.210Z] =================================================================================================================== 00:20:05.470 [2024-12-10T13:22:06.210Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:05.470 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1662834 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.DeTIfxVeJ5 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DeTIfxVeJ5 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DeTIfxVeJ5 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DeTIfxVeJ5 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.DeTIfxVeJ5 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1664650 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1664650 /var/tmp/bdevperf.sock 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1664650 ']' 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:05.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.728 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.728 [2024-12-10 14:22:06.371900] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:20:05.728 [2024-12-10 14:22:06.371948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1664650 ] 00:20:05.728 [2024-12-10 14:22:06.449471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.986 [2024-12-10 14:22:06.488897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.986 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.986 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:05.986 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DeTIfxVeJ5 00:20:06.244 [2024-12-10 14:22:06.767887] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DeTIfxVeJ5': 0100666 00:20:06.244 [2024-12-10 14:22:06.767918] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:06.244 request: 00:20:06.244 { 00:20:06.244 "name": "key0", 00:20:06.244 "path": "/tmp/tmp.DeTIfxVeJ5", 00:20:06.244 "method": "keyring_file_add_key", 00:20:06.244 "req_id": 1 00:20:06.244 } 00:20:06.244 Got JSON-RPC error response 00:20:06.244 response: 00:20:06.244 { 00:20:06.244 "code": -1, 00:20:06.244 "message": "Operation not permitted" 00:20:06.244 } 00:20:06.244 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:06.503 [2024-12-10 14:22:06.984523] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:06.503 [2024-12-10 14:22:06.984557] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:06.503 request: 00:20:06.503 { 00:20:06.503 "name": "TLSTEST", 00:20:06.503 "trtype": "tcp", 00:20:06.503 "traddr": "10.0.0.2", 00:20:06.503 "adrfam": "ipv4", 00:20:06.503 "trsvcid": "4420", 00:20:06.503 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.503 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:06.503 "prchk_reftag": false, 00:20:06.503 "prchk_guard": false, 00:20:06.503 "hdgst": false, 00:20:06.503 "ddgst": false, 00:20:06.503 "psk": "key0", 00:20:06.503 "allow_unrecognized_csi": false, 00:20:06.503 "method": "bdev_nvme_attach_controller", 00:20:06.503 "req_id": 1 00:20:06.503 } 00:20:06.503 Got JSON-RPC error response 00:20:06.503 response: 00:20:06.503 { 00:20:06.503 "code": -126, 00:20:06.503 "message": "Required key not available" 00:20:06.503 } 00:20:06.503 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1664650 00:20:06.503 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1664650 ']' 00:20:06.503 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1664650 00:20:06.503 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:06.503 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:06.503 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1664650 00:20:06.503 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:06.503 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:06.503 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1664650' 00:20:06.503 killing process with pid 1664650 00:20:06.503 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1664650 00:20:06.503 Received shutdown signal, test time was about 10.000000 seconds 00:20:06.503 00:20:06.503 Latency(us) 00:20:06.503 [2024-12-10T13:22:07.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.503 [2024-12-10T13:22:07.243Z] =================================================================================================================== 00:20:06.503 [2024-12-10T13:22:07.243Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:06.503 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1664650 00:20:06.503 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:06.503 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:06.503 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:06.503 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:06.503 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:06.503 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1662585 00:20:06.503 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1662585 ']' 00:20:06.503 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1662585 00:20:06.503 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:06.503 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:06.503 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1662585 00:20:06.762 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:06.762 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:06.762 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1662585' 00:20:06.762 killing process with pid 1662585 00:20:06.762 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1662585 00:20:06.762 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1662585 00:20:06.762 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:06.762 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:06.762 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:06.762 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.762 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1664888 00:20:06.762 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:06.762 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1664888 00:20:06.762 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1664888 ']' 00:20:06.762 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.762 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.762 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.762 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.763 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.763 [2024-12-10 14:22:07.483100] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:20:06.763 [2024-12-10 14:22:07.483147] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.021 [2024-12-10 14:22:07.568286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.021 [2024-12-10 14:22:07.607651] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.021 [2024-12-10 14:22:07.607688] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.021 [2024-12-10 14:22:07.607695] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.021 [2024-12-10 14:22:07.607701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.021 [2024-12-10 14:22:07.607706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
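A fresh nvmf_tgt (pid 1664888) is up at this point, and target/tls.sh@178 is about to re-run setup_nvmf_tgt expecting the keyring step to reject the key file, which the chmod 0666 at target/tls.sh@171 above left world-readable. For orientation, the happy-path sequence that setup_nvmf_tgt drove at target/tls.sh@166, condensed from the RPC invocations in this log (rpc.py path shortened):

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k        # -k enables TLS on the listener
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.DeTIfxVeJ5
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key0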
00:20:07.021 [2024-12-10 14:22:07.608243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.021 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.021 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:07.021 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:07.021 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:07.021 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.021 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.021 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.DeTIfxVeJ5 00:20:07.021 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:07.021 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.DeTIfxVeJ5 00:20:07.021 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:20:07.021 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.021 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:20:07.021 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.021 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.DeTIfxVeJ5 00:20:07.021 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.DeTIfxVeJ5 00:20:07.021 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:07.279 [2024-12-10 14:22:07.920618] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.279 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:07.538 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:07.796 [2024-12-10 14:22:08.337684] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:07.796 [2024-12-10 14:22:08.337892] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.796 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:07.796 malloc0 00:20:08.055 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:08.055 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.DeTIfxVeJ5 00:20:08.313 [2024-12-10 
14:22:08.923226] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DeTIfxVeJ5': 0100666 00:20:08.313 [2024-12-10 14:22:08.923254] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:08.313 request: 00:20:08.313 { 00:20:08.313 "name": "key0", 00:20:08.313 "path": "/tmp/tmp.DeTIfxVeJ5", 00:20:08.313 "method": "keyring_file_add_key", 00:20:08.313 "req_id": 1 00:20:08.313 } 00:20:08.313 Got JSON-RPC error response 00:20:08.313 response: 00:20:08.313 { 00:20:08.313 "code": -1, 00:20:08.313 "message": "Operation not permitted" 00:20:08.313 } 00:20:08.313 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:08.572 [2024-12-10 14:22:09.119748] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:08.572 [2024-12-10 14:22:09.119779] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:08.572 request: 00:20:08.572 { 00:20:08.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.572 "host": "nqn.2016-06.io.spdk:host1", 00:20:08.572 "psk": "key0", 00:20:08.572 "method": "nvmf_subsystem_add_host", 00:20:08.572 "req_id": 1 00:20:08.572 } 00:20:08.572 Got JSON-RPC error response 00:20:08.572 response: 00:20:08.572 { 00:20:08.572 "code": -32603, 00:20:08.572 "message": "Internal error" 00:20:08.572 } 00:20:08.572 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:08.572 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:08.572 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:08.572 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:08.572 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1664888 00:20:08.572 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1664888 ']' 00:20:08.572 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1664888 00:20:08.572 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:08.572 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:08.572 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1664888 00:20:08.572 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:08.572 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:08.572 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1664888' 00:20:08.572 killing process with pid 1664888 00:20:08.572 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1664888 00:20:08.572 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1664888 00:20:08.831 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.DeTIfxVeJ5 00:20:08.831 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:08.831 14:22:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:08.831 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:08.831 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.831 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1665160 00:20:08.831 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:08.831 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1665160 00:20:08.831 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1665160 ']' 00:20:08.831 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.831 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.831 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.831 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.831 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.831 [2024-12-10 14:22:09.428050] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:20:08.831 [2024-12-10 14:22:09.428097] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.831 [2024-12-10 14:22:09.510734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.831 [2024-12-10 14:22:09.550046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.831 [2024-12-10 14:22:09.550088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.831 [2024-12-10 14:22:09.550094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.831 [2024-12-10 14:22:09.550100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.831 [2024-12-10 14:22:09.550105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
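
The failure in the trace above (tls.sh@178) is deliberate: setup_nvmf_tgt is wrapped in the NOT helper from autotest_common.sh, which inverts the exit status, so the stage passes precisely because keyring_file_add_key was rejected. keyring_file_check_path refused the key file since its mode was 0100666 (group/other readable); the JSON-RPC layer surfaced that as code -1 "Operation not permitted", and the dependent nvmf_subsystem_add_host then failed with -32603 because key0 was never registered. A minimal sketch of the pattern, assuming a simplified NOT (the real helper also distinguishes signal exits, which is what the es > 128 check in the trace is for):

    # simplified paraphrase of the NOT() helper exercised at tls.sh@178
    NOT() {
        "$@" && return 1   # unexpected success -> fail the test
        return 0           # expected failure -> pass
    }

    # the remedy applied at tls.sh@182 before the target is restarted:
    chmod 0600 /tmp/tmp.DeTIfxVeJ5   # keyring_file requires owner-only access

After the chmod, the fresh nvmf_tgt instance started just above (pid 1665160, core mask 0x2) reruns the same sequence and the key registers cleanly.
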
00:20:08.831 [2024-12-10 14:22:09.550649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.090 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.090 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:09.090 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:09.090 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:09.090 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.090 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.090 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.DeTIfxVeJ5 00:20:09.090 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.DeTIfxVeJ5 00:20:09.090 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:09.349 [2024-12-10 14:22:09.863618] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.349 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:09.608 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:09.608 [2024-12-10 14:22:10.268657] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:09.608 [2024-12-10 14:22:10.268874] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.608 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:09.866 malloc0 00:20:09.866 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:10.124 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.DeTIfxVeJ5 00:20:10.382 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:10.382 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:10.382 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1665596 00:20:10.382 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:10.382 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1665596 /var/tmp/bdevperf.sock 00:20:10.382 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1665596 ']' 00:20:10.382 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:10.382 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.382 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:10.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:10.382 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.382 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.641 [2024-12-10 14:22:11.136777] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:20:10.641 [2024-12-10 14:22:11.136829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1665596 ] 00:20:10.641 [2024-12-10 14:22:11.215130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.641 [2024-12-10 14:22:11.256462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.641 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.641 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:10.641 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DeTIfxVeJ5 00:20:10.899 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:11.157 [2024-12-10 14:22:11.708754] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:11.157 TLSTESTn1 00:20:11.157 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:11.416 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:11.416 "subsystems": [ 00:20:11.416 { 00:20:11.416 "subsystem": "keyring", 00:20:11.416 "config": [ 00:20:11.416 { 00:20:11.416 "method": "keyring_file_add_key", 00:20:11.416 "params": { 00:20:11.416 "name": "key0", 00:20:11.416 "path": "/tmp/tmp.DeTIfxVeJ5" 00:20:11.416 } 00:20:11.416 } 00:20:11.416 ] 00:20:11.416 }, 00:20:11.416 { 00:20:11.416 "subsystem": "iobuf", 00:20:11.416 "config": [ 00:20:11.416 { 00:20:11.416 "method": "iobuf_set_options", 00:20:11.416 "params": { 00:20:11.416 "small_pool_count": 8192, 00:20:11.416 "large_pool_count": 1024, 00:20:11.416 "small_bufsize": 8192, 00:20:11.416 "large_bufsize": 135168, 00:20:11.416 "enable_numa": false 00:20:11.416 } 00:20:11.416 } 00:20:11.416 ] 00:20:11.416 }, 00:20:11.416 { 00:20:11.416 "subsystem": "sock", 00:20:11.416 "config": [ 00:20:11.416 { 00:20:11.416 "method": "sock_set_default_impl", 00:20:11.416 "params": { 00:20:11.416 "impl_name": "posix" 
00:20:11.416 } 00:20:11.416 }, 00:20:11.416 { 00:20:11.416 "method": "sock_impl_set_options", 00:20:11.416 "params": { 00:20:11.416 "impl_name": "ssl", 00:20:11.416 "recv_buf_size": 4096, 00:20:11.416 "send_buf_size": 4096, 00:20:11.416 "enable_recv_pipe": true, 00:20:11.416 "enable_quickack": false, 00:20:11.416 "enable_placement_id": 0, 00:20:11.416 "enable_zerocopy_send_server": true, 00:20:11.416 "enable_zerocopy_send_client": false, 00:20:11.416 "zerocopy_threshold": 0, 00:20:11.416 "tls_version": 0, 00:20:11.416 "enable_ktls": false 00:20:11.416 } 00:20:11.416 }, 00:20:11.416 { 00:20:11.416 "method": "sock_impl_set_options", 00:20:11.416 "params": { 00:20:11.416 "impl_name": "posix", 00:20:11.416 "recv_buf_size": 2097152, 00:20:11.416 "send_buf_size": 2097152, 00:20:11.416 "enable_recv_pipe": true, 00:20:11.416 "enable_quickack": false, 00:20:11.416 "enable_placement_id": 0, 00:20:11.416 "enable_zerocopy_send_server": true, 00:20:11.416 "enable_zerocopy_send_client": false, 00:20:11.416 "zerocopy_threshold": 0, 00:20:11.416 "tls_version": 0, 00:20:11.416 "enable_ktls": false 00:20:11.416 } 00:20:11.416 } 00:20:11.416 ] 00:20:11.416 }, 00:20:11.416 { 00:20:11.416 "subsystem": "vmd", 00:20:11.416 "config": [] 00:20:11.416 }, 00:20:11.416 { 00:20:11.416 "subsystem": "accel", 00:20:11.416 "config": [ 00:20:11.416 { 00:20:11.416 "method": "accel_set_options", 00:20:11.416 "params": { 00:20:11.416 "small_cache_size": 128, 00:20:11.416 "large_cache_size": 16, 00:20:11.416 "task_count": 2048, 00:20:11.416 "sequence_count": 2048, 00:20:11.416 "buf_count": 2048 00:20:11.416 } 00:20:11.416 } 00:20:11.416 ] 00:20:11.416 }, 00:20:11.416 { 00:20:11.416 "subsystem": "bdev", 00:20:11.416 "config": [ 00:20:11.416 { 00:20:11.416 "method": "bdev_set_options", 00:20:11.416 "params": { 00:20:11.416 "bdev_io_pool_size": 65535, 00:20:11.416 "bdev_io_cache_size": 256, 00:20:11.416 "bdev_auto_examine": true, 00:20:11.416 "iobuf_small_cache_size": 128, 00:20:11.416 "iobuf_large_cache_size": 16 00:20:11.416 } 00:20:11.416 }, 00:20:11.416 { 00:20:11.416 "method": "bdev_raid_set_options", 00:20:11.416 "params": { 00:20:11.416 "process_window_size_kb": 1024, 00:20:11.416 "process_max_bandwidth_mb_sec": 0 00:20:11.416 } 00:20:11.416 }, 00:20:11.416 { 00:20:11.416 "method": "bdev_iscsi_set_options", 00:20:11.416 "params": { 00:20:11.416 "timeout_sec": 30 00:20:11.416 } 00:20:11.416 }, 00:20:11.416 { 00:20:11.416 "method": "bdev_nvme_set_options", 00:20:11.416 "params": { 00:20:11.416 "action_on_timeout": "none", 00:20:11.416 "timeout_us": 0, 00:20:11.416 "timeout_admin_us": 0, 00:20:11.416 "keep_alive_timeout_ms": 10000, 00:20:11.416 "arbitration_burst": 0, 00:20:11.416 "low_priority_weight": 0, 00:20:11.416 "medium_priority_weight": 0, 00:20:11.416 "high_priority_weight": 0, 00:20:11.416 "nvme_adminq_poll_period_us": 10000, 00:20:11.416 "nvme_ioq_poll_period_us": 0, 00:20:11.416 "io_queue_requests": 0, 00:20:11.416 "delay_cmd_submit": true, 00:20:11.416 "transport_retry_count": 4, 00:20:11.416 "bdev_retry_count": 3, 00:20:11.416 "transport_ack_timeout": 0, 00:20:11.417 "ctrlr_loss_timeout_sec": 0, 00:20:11.417 "reconnect_delay_sec": 0, 00:20:11.417 "fast_io_fail_timeout_sec": 0, 00:20:11.417 "disable_auto_failback": false, 00:20:11.417 "generate_uuids": false, 00:20:11.417 "transport_tos": 0, 00:20:11.417 "nvme_error_stat": false, 00:20:11.417 "rdma_srq_size": 0, 00:20:11.417 "io_path_stat": false, 00:20:11.417 "allow_accel_sequence": false, 00:20:11.417 "rdma_max_cq_size": 0, 00:20:11.417 
"rdma_cm_event_timeout_ms": 0, 00:20:11.417 "dhchap_digests": [ 00:20:11.417 "sha256", 00:20:11.417 "sha384", 00:20:11.417 "sha512" 00:20:11.417 ], 00:20:11.417 "dhchap_dhgroups": [ 00:20:11.417 "null", 00:20:11.417 "ffdhe2048", 00:20:11.417 "ffdhe3072", 00:20:11.417 "ffdhe4096", 00:20:11.417 "ffdhe6144", 00:20:11.417 "ffdhe8192" 00:20:11.417 ] 00:20:11.417 } 00:20:11.417 }, 00:20:11.417 { 00:20:11.417 "method": "bdev_nvme_set_hotplug", 00:20:11.417 "params": { 00:20:11.417 "period_us": 100000, 00:20:11.417 "enable": false 00:20:11.417 } 00:20:11.417 }, 00:20:11.417 { 00:20:11.417 "method": "bdev_malloc_create", 00:20:11.417 "params": { 00:20:11.417 "name": "malloc0", 00:20:11.417 "num_blocks": 8192, 00:20:11.417 "block_size": 4096, 00:20:11.417 "physical_block_size": 4096, 00:20:11.417 "uuid": "a74b62c9-2842-43dd-8215-746f34695cf2", 00:20:11.417 "optimal_io_boundary": 0, 00:20:11.417 "md_size": 0, 00:20:11.417 "dif_type": 0, 00:20:11.417 "dif_is_head_of_md": false, 00:20:11.417 "dif_pi_format": 0 00:20:11.417 } 00:20:11.417 }, 00:20:11.417 { 00:20:11.417 "method": "bdev_wait_for_examine" 00:20:11.417 } 00:20:11.417 ] 00:20:11.417 }, 00:20:11.417 { 00:20:11.417 "subsystem": "nbd", 00:20:11.417 "config": [] 00:20:11.417 }, 00:20:11.417 { 00:20:11.417 "subsystem": "scheduler", 00:20:11.417 "config": [ 00:20:11.417 { 00:20:11.417 "method": "framework_set_scheduler", 00:20:11.417 "params": { 00:20:11.417 "name": "static" 00:20:11.417 } 00:20:11.417 } 00:20:11.417 ] 00:20:11.417 }, 00:20:11.417 { 00:20:11.417 "subsystem": "nvmf", 00:20:11.417 "config": [ 00:20:11.417 { 00:20:11.417 "method": "nvmf_set_config", 00:20:11.417 "params": { 00:20:11.417 "discovery_filter": "match_any", 00:20:11.417 "admin_cmd_passthru": { 00:20:11.417 "identify_ctrlr": false 00:20:11.417 }, 00:20:11.417 "dhchap_digests": [ 00:20:11.417 "sha256", 00:20:11.417 "sha384", 00:20:11.417 "sha512" 00:20:11.417 ], 00:20:11.417 "dhchap_dhgroups": [ 00:20:11.417 "null", 00:20:11.417 "ffdhe2048", 00:20:11.417 "ffdhe3072", 00:20:11.417 "ffdhe4096", 00:20:11.417 "ffdhe6144", 00:20:11.417 "ffdhe8192" 00:20:11.417 ] 00:20:11.417 } 00:20:11.417 }, 00:20:11.417 { 00:20:11.417 "method": "nvmf_set_max_subsystems", 00:20:11.417 "params": { 00:20:11.417 "max_subsystems": 1024 00:20:11.417 } 00:20:11.417 }, 00:20:11.417 { 00:20:11.417 "method": "nvmf_set_crdt", 00:20:11.417 "params": { 00:20:11.417 "crdt1": 0, 00:20:11.417 "crdt2": 0, 00:20:11.417 "crdt3": 0 00:20:11.417 } 00:20:11.417 }, 00:20:11.417 { 00:20:11.417 "method": "nvmf_create_transport", 00:20:11.417 "params": { 00:20:11.417 "trtype": "TCP", 00:20:11.417 "max_queue_depth": 128, 00:20:11.417 "max_io_qpairs_per_ctrlr": 127, 00:20:11.417 "in_capsule_data_size": 4096, 00:20:11.417 "max_io_size": 131072, 00:20:11.417 "io_unit_size": 131072, 00:20:11.417 "max_aq_depth": 128, 00:20:11.417 "num_shared_buffers": 511, 00:20:11.417 "buf_cache_size": 4294967295, 00:20:11.417 "dif_insert_or_strip": false, 00:20:11.417 "zcopy": false, 00:20:11.417 "c2h_success": false, 00:20:11.417 "sock_priority": 0, 00:20:11.417 "abort_timeout_sec": 1, 00:20:11.417 "ack_timeout": 0, 00:20:11.417 "data_wr_pool_size": 0 00:20:11.417 } 00:20:11.417 }, 00:20:11.417 { 00:20:11.417 "method": "nvmf_create_subsystem", 00:20:11.417 "params": { 00:20:11.417 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.417 "allow_any_host": false, 00:20:11.417 "serial_number": "SPDK00000000000001", 00:20:11.417 "model_number": "SPDK bdev Controller", 00:20:11.417 "max_namespaces": 10, 00:20:11.417 "min_cntlid": 1, 00:20:11.417 
"max_cntlid": 65519, 00:20:11.417 "ana_reporting": false 00:20:11.417 } 00:20:11.417 }, 00:20:11.417 { 00:20:11.417 "method": "nvmf_subsystem_add_host", 00:20:11.417 "params": { 00:20:11.417 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.417 "host": "nqn.2016-06.io.spdk:host1", 00:20:11.417 "psk": "key0" 00:20:11.417 } 00:20:11.417 }, 00:20:11.417 { 00:20:11.417 "method": "nvmf_subsystem_add_ns", 00:20:11.417 "params": { 00:20:11.417 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.417 "namespace": { 00:20:11.417 "nsid": 1, 00:20:11.417 "bdev_name": "malloc0", 00:20:11.417 "nguid": "A74B62C9284243DD8215746F34695CF2", 00:20:11.417 "uuid": "a74b62c9-2842-43dd-8215-746f34695cf2", 00:20:11.417 "no_auto_visible": false 00:20:11.417 } 00:20:11.417 } 00:20:11.417 }, 00:20:11.417 { 00:20:11.417 "method": "nvmf_subsystem_add_listener", 00:20:11.417 "params": { 00:20:11.417 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.417 "listen_address": { 00:20:11.417 "trtype": "TCP", 00:20:11.417 "adrfam": "IPv4", 00:20:11.417 "traddr": "10.0.0.2", 00:20:11.417 "trsvcid": "4420" 00:20:11.417 }, 00:20:11.417 "secure_channel": true 00:20:11.417 } 00:20:11.417 } 00:20:11.417 ] 00:20:11.417 } 00:20:11.417 ] 00:20:11.417 }' 00:20:11.417 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:11.676 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:11.676 "subsystems": [ 00:20:11.676 { 00:20:11.676 "subsystem": "keyring", 00:20:11.676 "config": [ 00:20:11.676 { 00:20:11.676 "method": "keyring_file_add_key", 00:20:11.676 "params": { 00:20:11.676 "name": "key0", 00:20:11.676 "path": "/tmp/tmp.DeTIfxVeJ5" 00:20:11.676 } 00:20:11.676 } 00:20:11.676 ] 00:20:11.676 }, 00:20:11.676 { 00:20:11.676 "subsystem": "iobuf", 00:20:11.676 "config": [ 00:20:11.676 { 00:20:11.676 "method": "iobuf_set_options", 00:20:11.676 "params": { 00:20:11.676 "small_pool_count": 8192, 00:20:11.676 "large_pool_count": 1024, 00:20:11.676 "small_bufsize": 8192, 00:20:11.676 "large_bufsize": 135168, 00:20:11.676 "enable_numa": false 00:20:11.676 } 00:20:11.676 } 00:20:11.676 ] 00:20:11.676 }, 00:20:11.676 { 00:20:11.676 "subsystem": "sock", 00:20:11.676 "config": [ 00:20:11.676 { 00:20:11.676 "method": "sock_set_default_impl", 00:20:11.676 "params": { 00:20:11.676 "impl_name": "posix" 00:20:11.676 } 00:20:11.676 }, 00:20:11.676 { 00:20:11.676 "method": "sock_impl_set_options", 00:20:11.676 "params": { 00:20:11.676 "impl_name": "ssl", 00:20:11.676 "recv_buf_size": 4096, 00:20:11.676 "send_buf_size": 4096, 00:20:11.676 "enable_recv_pipe": true, 00:20:11.676 "enable_quickack": false, 00:20:11.676 "enable_placement_id": 0, 00:20:11.676 "enable_zerocopy_send_server": true, 00:20:11.676 "enable_zerocopy_send_client": false, 00:20:11.676 "zerocopy_threshold": 0, 00:20:11.676 "tls_version": 0, 00:20:11.676 "enable_ktls": false 00:20:11.676 } 00:20:11.676 }, 00:20:11.676 { 00:20:11.676 "method": "sock_impl_set_options", 00:20:11.676 "params": { 00:20:11.676 "impl_name": "posix", 00:20:11.676 "recv_buf_size": 2097152, 00:20:11.676 "send_buf_size": 2097152, 00:20:11.676 "enable_recv_pipe": true, 00:20:11.676 "enable_quickack": false, 00:20:11.676 "enable_placement_id": 0, 00:20:11.676 "enable_zerocopy_send_server": true, 00:20:11.676 "enable_zerocopy_send_client": false, 00:20:11.676 "zerocopy_threshold": 0, 00:20:11.676 "tls_version": 0, 00:20:11.676 "enable_ktls": false 00:20:11.676 } 00:20:11.676 
} 00:20:11.676 ] 00:20:11.676 }, 00:20:11.677 { 00:20:11.677 "subsystem": "vmd", 00:20:11.677 "config": [] 00:20:11.677 }, 00:20:11.677 { 00:20:11.677 "subsystem": "accel", 00:20:11.677 "config": [ 00:20:11.677 { 00:20:11.677 "method": "accel_set_options", 00:20:11.677 "params": { 00:20:11.677 "small_cache_size": 128, 00:20:11.677 "large_cache_size": 16, 00:20:11.677 "task_count": 2048, 00:20:11.677 "sequence_count": 2048, 00:20:11.677 "buf_count": 2048 00:20:11.677 } 00:20:11.677 } 00:20:11.677 ] 00:20:11.677 }, 00:20:11.677 { 00:20:11.677 "subsystem": "bdev", 00:20:11.677 "config": [ 00:20:11.677 { 00:20:11.677 "method": "bdev_set_options", 00:20:11.677 "params": { 00:20:11.677 "bdev_io_pool_size": 65535, 00:20:11.677 "bdev_io_cache_size": 256, 00:20:11.677 "bdev_auto_examine": true, 00:20:11.677 "iobuf_small_cache_size": 128, 00:20:11.677 "iobuf_large_cache_size": 16 00:20:11.677 } 00:20:11.677 }, 00:20:11.677 { 00:20:11.677 "method": "bdev_raid_set_options", 00:20:11.677 "params": { 00:20:11.677 "process_window_size_kb": 1024, 00:20:11.677 "process_max_bandwidth_mb_sec": 0 00:20:11.677 } 00:20:11.677 }, 00:20:11.677 { 00:20:11.677 "method": "bdev_iscsi_set_options", 00:20:11.677 "params": { 00:20:11.677 "timeout_sec": 30 00:20:11.677 } 00:20:11.677 }, 00:20:11.677 { 00:20:11.677 "method": "bdev_nvme_set_options", 00:20:11.677 "params": { 00:20:11.677 "action_on_timeout": "none", 00:20:11.677 "timeout_us": 0, 00:20:11.677 "timeout_admin_us": 0, 00:20:11.677 "keep_alive_timeout_ms": 10000, 00:20:11.677 "arbitration_burst": 0, 00:20:11.677 "low_priority_weight": 0, 00:20:11.677 "medium_priority_weight": 0, 00:20:11.677 "high_priority_weight": 0, 00:20:11.677 "nvme_adminq_poll_period_us": 10000, 00:20:11.677 "nvme_ioq_poll_period_us": 0, 00:20:11.677 "io_queue_requests": 512, 00:20:11.677 "delay_cmd_submit": true, 00:20:11.677 "transport_retry_count": 4, 00:20:11.677 "bdev_retry_count": 3, 00:20:11.677 "transport_ack_timeout": 0, 00:20:11.677 "ctrlr_loss_timeout_sec": 0, 00:20:11.677 "reconnect_delay_sec": 0, 00:20:11.677 "fast_io_fail_timeout_sec": 0, 00:20:11.677 "disable_auto_failback": false, 00:20:11.677 "generate_uuids": false, 00:20:11.677 "transport_tos": 0, 00:20:11.677 "nvme_error_stat": false, 00:20:11.677 "rdma_srq_size": 0, 00:20:11.677 "io_path_stat": false, 00:20:11.677 "allow_accel_sequence": false, 00:20:11.677 "rdma_max_cq_size": 0, 00:20:11.677 "rdma_cm_event_timeout_ms": 0, 00:20:11.677 "dhchap_digests": [ 00:20:11.677 "sha256", 00:20:11.677 "sha384", 00:20:11.677 "sha512" 00:20:11.677 ], 00:20:11.677 "dhchap_dhgroups": [ 00:20:11.677 "null", 00:20:11.677 "ffdhe2048", 00:20:11.677 "ffdhe3072", 00:20:11.677 "ffdhe4096", 00:20:11.677 "ffdhe6144", 00:20:11.677 "ffdhe8192" 00:20:11.677 ] 00:20:11.677 } 00:20:11.677 }, 00:20:11.677 { 00:20:11.677 "method": "bdev_nvme_attach_controller", 00:20:11.677 "params": { 00:20:11.677 "name": "TLSTEST", 00:20:11.677 "trtype": "TCP", 00:20:11.677 "adrfam": "IPv4", 00:20:11.677 "traddr": "10.0.0.2", 00:20:11.677 "trsvcid": "4420", 00:20:11.677 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.677 "prchk_reftag": false, 00:20:11.677 "prchk_guard": false, 00:20:11.677 "ctrlr_loss_timeout_sec": 0, 00:20:11.677 "reconnect_delay_sec": 0, 00:20:11.677 "fast_io_fail_timeout_sec": 0, 00:20:11.677 "psk": "key0", 00:20:11.677 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:11.677 "hdgst": false, 00:20:11.677 "ddgst": false, 00:20:11.677 "multipath": "multipath" 00:20:11.677 } 00:20:11.677 }, 00:20:11.677 { 00:20:11.677 "method": 
"bdev_nvme_set_hotplug", 00:20:11.677 "params": { 00:20:11.677 "period_us": 100000, 00:20:11.677 "enable": false 00:20:11.677 } 00:20:11.677 }, 00:20:11.677 { 00:20:11.677 "method": "bdev_wait_for_examine" 00:20:11.677 } 00:20:11.677 ] 00:20:11.677 }, 00:20:11.677 { 00:20:11.677 "subsystem": "nbd", 00:20:11.677 "config": [] 00:20:11.677 } 00:20:11.677 ] 00:20:11.677 }' 00:20:11.677 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1665596 00:20:11.677 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1665596 ']' 00:20:11.677 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1665596 00:20:11.677 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:11.677 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:11.677 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1665596 00:20:11.677 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:11.677 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:11.677 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1665596' 00:20:11.677 killing process with pid 1665596 00:20:11.677 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1665596 00:20:11.677 Received shutdown signal, test time was about 10.000000 seconds 00:20:11.677 00:20:11.677 Latency(us) 00:20:11.677 [2024-12-10T13:22:12.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.677 [2024-12-10T13:22:12.417Z] =================================================================================================================== 00:20:11.677 [2024-12-10T13:22:12.417Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:11.677 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1665596 00:20:11.936 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1665160 00:20:11.936 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1665160 ']' 00:20:11.936 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1665160 00:20:11.936 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:11.936 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:11.936 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1665160 00:20:11.936 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:11.936 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:11.936 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1665160' 00:20:11.936 killing process with pid 1665160 00:20:11.936 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1665160 00:20:11.936 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1665160 00:20:12.195 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:12.195 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:12.195 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:12.195 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:12.195 "subsystems": [ 00:20:12.195 { 00:20:12.195 "subsystem": "keyring", 00:20:12.195 "config": [ 00:20:12.195 { 00:20:12.195 "method": "keyring_file_add_key", 00:20:12.195 "params": { 00:20:12.195 "name": "key0", 00:20:12.195 "path": "/tmp/tmp.DeTIfxVeJ5" 00:20:12.195 } 00:20:12.195 } 00:20:12.195 ] 00:20:12.195 }, 00:20:12.195 { 00:20:12.195 "subsystem": "iobuf", 00:20:12.195 "config": [ 00:20:12.195 { 00:20:12.195 "method": "iobuf_set_options", 00:20:12.195 "params": { 00:20:12.195 "small_pool_count": 8192, 00:20:12.195 "large_pool_count": 1024, 00:20:12.195 "small_bufsize": 8192, 00:20:12.195 "large_bufsize": 135168, 00:20:12.195 "enable_numa": false 00:20:12.195 } 00:20:12.195 } 00:20:12.195 ] 00:20:12.195 }, 00:20:12.195 { 00:20:12.195 "subsystem": "sock", 00:20:12.195 "config": [ 00:20:12.195 { 00:20:12.195 "method": "sock_set_default_impl", 00:20:12.195 "params": { 00:20:12.195 "impl_name": "posix" 00:20:12.195 } 00:20:12.195 }, 00:20:12.195 { 00:20:12.195 "method": "sock_impl_set_options", 00:20:12.195 "params": { 00:20:12.195 "impl_name": "ssl", 00:20:12.195 "recv_buf_size": 4096, 00:20:12.195 "send_buf_size": 4096, 00:20:12.195 "enable_recv_pipe": true, 00:20:12.195 "enable_quickack": false, 00:20:12.195 "enable_placement_id": 0, 00:20:12.195 "enable_zerocopy_send_server": true, 00:20:12.195 "enable_zerocopy_send_client": false, 00:20:12.195 "zerocopy_threshold": 0, 00:20:12.195 "tls_version": 0, 00:20:12.195 "enable_ktls": false 00:20:12.195 } 00:20:12.195 }, 00:20:12.195 { 00:20:12.195 "method": "sock_impl_set_options", 00:20:12.195 "params": { 00:20:12.195 "impl_name": "posix", 00:20:12.195 "recv_buf_size": 2097152, 00:20:12.195 "send_buf_size": 2097152, 00:20:12.195 "enable_recv_pipe": true, 00:20:12.195 "enable_quickack": false, 00:20:12.195 "enable_placement_id": 0, 00:20:12.195 "enable_zerocopy_send_server": true, 00:20:12.195 "enable_zerocopy_send_client": false, 00:20:12.195 "zerocopy_threshold": 0, 00:20:12.195 "tls_version": 0, 00:20:12.195 "enable_ktls": false 00:20:12.195 } 00:20:12.195 } 00:20:12.195 ] 00:20:12.195 }, 00:20:12.195 { 00:20:12.195 "subsystem": "vmd", 00:20:12.195 "config": [] 00:20:12.195 }, 00:20:12.195 { 00:20:12.195 "subsystem": "accel", 00:20:12.195 "config": [ 00:20:12.195 { 00:20:12.195 "method": "accel_set_options", 00:20:12.195 "params": { 00:20:12.195 "small_cache_size": 128, 00:20:12.195 "large_cache_size": 16, 00:20:12.195 "task_count": 2048, 00:20:12.195 "sequence_count": 2048, 00:20:12.195 "buf_count": 2048 00:20:12.195 } 00:20:12.195 } 00:20:12.195 ] 00:20:12.195 }, 00:20:12.195 { 00:20:12.195 "subsystem": "bdev", 00:20:12.195 "config": [ 00:20:12.195 { 00:20:12.195 "method": "bdev_set_options", 00:20:12.195 "params": { 00:20:12.195 "bdev_io_pool_size": 65535, 00:20:12.195 "bdev_io_cache_size": 256, 00:20:12.195 "bdev_auto_examine": true, 00:20:12.195 "iobuf_small_cache_size": 128, 00:20:12.195 "iobuf_large_cache_size": 16 00:20:12.195 } 00:20:12.195 }, 00:20:12.195 { 00:20:12.195 "method": "bdev_raid_set_options", 00:20:12.195 "params": { 00:20:12.195 "process_window_size_kb": 1024, 00:20:12.195 "process_max_bandwidth_mb_sec": 0 00:20:12.195 } 00:20:12.195 }, 
00:20:12.195 { 00:20:12.195 "method": "bdev_iscsi_set_options", 00:20:12.195 "params": { 00:20:12.195 "timeout_sec": 30 00:20:12.195 } 00:20:12.195 }, 00:20:12.195 { 00:20:12.195 "method": "bdev_nvme_set_options", 00:20:12.195 "params": { 00:20:12.195 "action_on_timeout": "none", 00:20:12.195 "timeout_us": 0, 00:20:12.195 "timeout_admin_us": 0, 00:20:12.195 "keep_alive_timeout_ms": 10000, 00:20:12.195 "arbitration_burst": 0, 00:20:12.195 "low_priority_weight": 0, 00:20:12.195 "medium_priority_weight": 0, 00:20:12.195 "high_priority_weight": 0, 00:20:12.195 "nvme_adminq_poll_period_us": 10000, 00:20:12.195 "nvme_ioq_poll_period_us": 0, 00:20:12.195 "io_queue_requests": 0, 00:20:12.195 "delay_cmd_submit": true, 00:20:12.195 "transport_retry_count": 4, 00:20:12.195 "bdev_retry_count": 3, 00:20:12.195 "transport_ack_timeout": 0, 00:20:12.195 "ctrlr_loss_timeout_sec": 0, 00:20:12.195 "reconnect_delay_sec": 0, 00:20:12.195 "fast_io_fail_timeout_sec": 0, 00:20:12.195 "disable_auto_failback": false, 00:20:12.195 "generate_uuids": false, 00:20:12.195 "transport_tos": 0, 00:20:12.195 "nvme_error_stat": false, 00:20:12.195 "rdma_srq_size": 0, 00:20:12.195 "io_path_stat": false, 00:20:12.195 "allow_accel_sequence": false, 00:20:12.195 "rdma_max_cq_size": 0, 00:20:12.195 "rdma_cm_event_timeout_ms": 0, 00:20:12.195 "dhchap_digests": [ 00:20:12.195 "sha256", 00:20:12.195 "sha384", 00:20:12.195 "sha512" 00:20:12.195 ], 00:20:12.195 "dhchap_dhgroups": [ 00:20:12.195 "null", 00:20:12.195 "ffdhe2048", 00:20:12.195 "ffdhe3072", 00:20:12.195 "ffdhe4096", 00:20:12.195 "ffdhe6144", 00:20:12.195 "ffdhe8192" 00:20:12.195 ] 00:20:12.195 } 00:20:12.195 }, 00:20:12.195 { 00:20:12.195 "method": "bdev_nvme_set_hotplug", 00:20:12.195 "params": { 00:20:12.195 "period_us": 100000, 00:20:12.195 "enable": false 00:20:12.195 } 00:20:12.195 }, 00:20:12.195 { 00:20:12.195 "method": "bdev_malloc_create", 00:20:12.195 "params": { 00:20:12.195 "name": "malloc0", 00:20:12.195 "num_blocks": 8192, 00:20:12.195 "block_size": 4096, 00:20:12.195 "physical_block_size": 4096, 00:20:12.195 "uuid": "a74b62c9-2842-43dd-8215-746f34695cf2", 00:20:12.195 "optimal_io_boundary": 0, 00:20:12.195 "md_size": 0, 00:20:12.195 "dif_type": 0, 00:20:12.195 "dif_is_head_of_md": false, 00:20:12.195 "dif_pi_format": 0 00:20:12.195 } 00:20:12.195 }, 00:20:12.195 { 00:20:12.195 "method": "bdev_wait_for_examine" 00:20:12.195 } 00:20:12.195 ] 00:20:12.195 }, 00:20:12.195 { 00:20:12.195 "subsystem": "nbd", 00:20:12.195 "config": [] 00:20:12.195 }, 00:20:12.195 { 00:20:12.195 "subsystem": "scheduler", 00:20:12.195 "config": [ 00:20:12.195 { 00:20:12.195 "method": "framework_set_scheduler", 00:20:12.195 "params": { 00:20:12.195 "name": "static" 00:20:12.195 } 00:20:12.195 } 00:20:12.195 ] 00:20:12.195 }, 00:20:12.195 { 00:20:12.195 "subsystem": "nvmf", 00:20:12.195 "config": [ 00:20:12.195 { 00:20:12.195 "method": "nvmf_set_config", 00:20:12.195 "params": { 00:20:12.195 "discovery_filter": "match_any", 00:20:12.195 "admin_cmd_passthru": { 00:20:12.195 "identify_ctrlr": false 00:20:12.195 }, 00:20:12.195 "dhchap_digests": [ 00:20:12.195 "sha256", 00:20:12.195 "sha384", 00:20:12.195 "sha512" 00:20:12.195 ], 00:20:12.195 "dhchap_dhgroups": [ 00:20:12.195 "null", 00:20:12.195 "ffdhe2048", 00:20:12.195 "ffdhe3072", 00:20:12.195 "ffdhe4096", 00:20:12.195 "ffdhe6144", 00:20:12.195 "ffdhe8192" 00:20:12.195 ] 00:20:12.195 } 00:20:12.195 }, 00:20:12.195 { 00:20:12.195 "method": "nvmf_set_max_subsystems", 00:20:12.196 "params": { 00:20:12.196 "max_subsystems": 1024 
00:20:12.196 } 00:20:12.196 }, 00:20:12.196 { 00:20:12.196 "method": "nvmf_set_crdt", 00:20:12.196 "params": { 00:20:12.196 "crdt1": 0, 00:20:12.196 "crdt2": 0, 00:20:12.196 "crdt3": 0 00:20:12.196 } 00:20:12.196 }, 00:20:12.196 { 00:20:12.196 "method": "nvmf_create_transport", 00:20:12.196 "params": { 00:20:12.196 "trtype": "TCP", 00:20:12.196 "max_queue_depth": 128, 00:20:12.196 "max_io_qpairs_per_ctrlr": 127, 00:20:12.196 "in_capsule_data_size": 4096, 00:20:12.196 "max_io_size": 131072, 00:20:12.196 "io_unit_size": 131072, 00:20:12.196 "max_aq_depth": 128, 00:20:12.196 "num_shared_buffers": 511, 00:20:12.196 "buf_cache_size": 4294967295, 00:20:12.196 "dif_insert_or_strip": false, 00:20:12.196 "zcopy": false, 00:20:12.196 "c2h_success": false, 00:20:12.196 "sock_priority": 0, 00:20:12.196 "abort_timeout_sec": 1, 00:20:12.196 "ack_timeout": 0, 00:20:12.196 "data_wr_pool_size": 0 00:20:12.196 } 00:20:12.196 }, 00:20:12.196 { 00:20:12.196 "method": "nvmf_create_subsystem", 00:20:12.196 "params": { 00:20:12.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.196 "allow_any_host": false, 00:20:12.196 "serial_number": "SPDK00000000000001", 00:20:12.196 "model_number": "SPDK bdev Controller", 00:20:12.196 "max_namespaces": 10, 00:20:12.196 "min_cntlid": 1, 00:20:12.196 "max_cntlid": 65519, 00:20:12.196 "ana_reporting": false 00:20:12.196 } 00:20:12.196 }, 00:20:12.196 { 00:20:12.196 "method": "nvmf_subsystem_add_host", 00:20:12.196 "params": { 00:20:12.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.196 "host": "nqn.2016-06.io.spdk:host1", 00:20:12.196 "psk": "key0" 00:20:12.196 } 00:20:12.196 }, 00:20:12.196 { 00:20:12.196 "method": "nvmf_subsystem_add_ns", 00:20:12.196 "params": { 00:20:12.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.196 "namespace": { 00:20:12.196 "nsid": 1, 00:20:12.196 "bdev_name": "malloc0", 00:20:12.196 "nguid": "A74B62C9284243DD8215746F34695CF2", 00:20:12.196 "uuid": "a74b62c9-2842-43dd-8215-746f34695cf2", 00:20:12.196 "no_auto_visible": false 00:20:12.196 } 00:20:12.196 } 00:20:12.196 }, 00:20:12.196 { 00:20:12.196 "method": "nvmf_subsystem_add_listener", 00:20:12.196 "params": { 00:20:12.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.196 "listen_address": { 00:20:12.196 "trtype": "TCP", 00:20:12.196 "adrfam": "IPv4", 00:20:12.196 "traddr": "10.0.0.2", 00:20:12.196 "trsvcid": "4420" 00:20:12.196 }, 00:20:12.196 "secure_channel": true 00:20:12.196 } 00:20:12.196 } 00:20:12.196 ] 00:20:12.196 } 00:20:12.196 ] 00:20:12.196 }' 00:20:12.196 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.196 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1665862 00:20:12.196 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:12.196 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1665862 00:20:12.196 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1665862 ']' 00:20:12.196 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.196 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:12.196 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:20:12.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.196 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.196 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.196 [2024-12-10 14:22:12.804236] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:20:12.196 [2024-12-10 14:22:12.804280] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.196 [2024-12-10 14:22:12.883445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.196 [2024-12-10 14:22:12.919833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.196 [2024-12-10 14:22:12.919863] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.196 [2024-12-10 14:22:12.919869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.196 [2024-12-10 14:22:12.919875] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.196 [2024-12-10 14:22:12.919880] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:12.196 [2024-12-10 14:22:12.920466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.454 [2024-12-10 14:22:13.133178] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.454 [2024-12-10 14:22:13.165205] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:12.454 [2024-12-10 14:22:13.165406] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:13.021 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.021 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:13.021 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:13.021 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:13.021 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.021 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.021 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1665971 00:20:13.021 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1665971 /var/tmp/bdevperf.sock 00:20:13.021 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1665971 ']' 00:20:13.021 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:13.021 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:13.021 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.021 14:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:13.021 "subsystems": [ 00:20:13.021 { 00:20:13.021 "subsystem": "keyring", 00:20:13.021 "config": [ 00:20:13.021 { 00:20:13.021 "method": "keyring_file_add_key", 00:20:13.021 "params": { 00:20:13.021 "name": "key0", 00:20:13.021 "path": "/tmp/tmp.DeTIfxVeJ5" 00:20:13.021 } 00:20:13.021 } 00:20:13.021 ] 00:20:13.021 }, 00:20:13.021 { 00:20:13.021 "subsystem": "iobuf", 00:20:13.021 "config": [ 00:20:13.021 { 00:20:13.021 "method": "iobuf_set_options", 00:20:13.021 "params": { 00:20:13.021 "small_pool_count": 8192, 00:20:13.021 "large_pool_count": 1024, 00:20:13.021 "small_bufsize": 8192, 00:20:13.021 "large_bufsize": 135168, 00:20:13.021 "enable_numa": false 00:20:13.021 } 00:20:13.021 } 00:20:13.021 ] 00:20:13.021 }, 00:20:13.021 { 00:20:13.021 "subsystem": "sock", 00:20:13.021 "config": [ 00:20:13.021 { 00:20:13.021 "method": "sock_set_default_impl", 00:20:13.021 "params": { 00:20:13.021 "impl_name": "posix" 00:20:13.021 } 00:20:13.021 }, 00:20:13.021 { 00:20:13.021 "method": "sock_impl_set_options", 00:20:13.021 "params": { 00:20:13.021 "impl_name": "ssl", 00:20:13.021 "recv_buf_size": 4096, 00:20:13.021 "send_buf_size": 4096, 00:20:13.021 "enable_recv_pipe": true, 00:20:13.021 "enable_quickack": false, 00:20:13.021 "enable_placement_id": 0, 00:20:13.021 "enable_zerocopy_send_server": true, 00:20:13.021 "enable_zerocopy_send_client": false, 00:20:13.021 "zerocopy_threshold": 0, 00:20:13.021 "tls_version": 0, 00:20:13.021 "enable_ktls": false 00:20:13.021 } 00:20:13.021 }, 00:20:13.021 { 00:20:13.021 "method": "sock_impl_set_options", 00:20:13.021 "params": { 00:20:13.021 "impl_name": "posix", 00:20:13.021 "recv_buf_size": 2097152, 00:20:13.021 "send_buf_size": 2097152, 00:20:13.021 "enable_recv_pipe": true, 00:20:13.021 "enable_quickack": false, 00:20:13.021 "enable_placement_id": 0, 00:20:13.021 "enable_zerocopy_send_server": true, 00:20:13.021 "enable_zerocopy_send_client": false, 00:20:13.021 "zerocopy_threshold": 0, 00:20:13.021 "tls_version": 0, 00:20:13.021 "enable_ktls": false 00:20:13.021 } 00:20:13.021 } 00:20:13.021 ] 00:20:13.021 }, 00:20:13.021 { 00:20:13.021 "subsystem": "vmd", 00:20:13.021 "config": [] 00:20:13.021 }, 00:20:13.021 { 00:20:13.021 "subsystem": "accel", 00:20:13.021 "config": [ 00:20:13.021 { 00:20:13.021 "method": "accel_set_options", 00:20:13.021 "params": { 00:20:13.021 "small_cache_size": 128, 00:20:13.021 "large_cache_size": 16, 00:20:13.021 "task_count": 2048, 00:20:13.021 "sequence_count": 2048, 00:20:13.021 "buf_count": 2048 00:20:13.021 } 00:20:13.021 } 00:20:13.021 ] 00:20:13.021 }, 00:20:13.021 { 00:20:13.021 "subsystem": "bdev", 00:20:13.022 "config": [ 00:20:13.022 { 00:20:13.022 "method": "bdev_set_options", 00:20:13.022 "params": { 00:20:13.022 "bdev_io_pool_size": 65535, 00:20:13.022 "bdev_io_cache_size": 256, 00:20:13.022 "bdev_auto_examine": true, 00:20:13.022 "iobuf_small_cache_size": 128, 00:20:13.022 "iobuf_large_cache_size": 16 00:20:13.022 } 00:20:13.022 }, 00:20:13.022 { 00:20:13.022 "method": "bdev_raid_set_options", 00:20:13.022 "params": { 00:20:13.022 "process_window_size_kb": 1024, 00:20:13.022 "process_max_bandwidth_mb_sec": 0 00:20:13.022 } 00:20:13.022 }, 00:20:13.022 { 00:20:13.022 "method": "bdev_iscsi_set_options", 00:20:13.022 "params": { 00:20:13.022 "timeout_sec": 30 00:20:13.022 } 00:20:13.022 }, 00:20:13.022 { 00:20:13.022 "method": "bdev_nvme_set_options", 00:20:13.022 "params": { 00:20:13.022 "action_on_timeout": "none", 00:20:13.022 
"timeout_us": 0, 00:20:13.022 "timeout_admin_us": 0, 00:20:13.022 "keep_alive_timeout_ms": 10000, 00:20:13.022 "arbitration_burst": 0, 00:20:13.022 "low_priority_weight": 0, 00:20:13.022 "medium_priority_weight": 0, 00:20:13.022 "high_priority_weight": 0, 00:20:13.022 "nvme_adminq_poll_period_us": 10000, 00:20:13.022 "nvme_ioq_poll_period_us": 0, 00:20:13.022 "io_queue_requests": 512, 00:20:13.022 "delay_cmd_submit": true, 00:20:13.022 "transport_retry_count": 4, 00:20:13.022 "bdev_retry_count": 3, 00:20:13.022 "transport_ack_timeout": 0, 00:20:13.022 "ctrlr_loss_timeout_sec": 0, 00:20:13.022 "reconnect_delay_sec": 0, 00:20:13.022 "fast_io_fail_timeout_sec": 0, 00:20:13.022 "disable_auto_failback": false, 00:20:13.022 "generate_uuids": false, 00:20:13.022 "transport_tos": 0, 00:20:13.022 "nvme_error_stat": false, 00:20:13.022 "rdma_srq_size": 0, 00:20:13.022 "io_path_stat": false, 00:20:13.022 "allow_accel_sequence": false, 00:20:13.022 "rdma_max_cq_size": 0, 00:20:13.022 "rdma_cm_event_timeout_ms": 0, 00:20:13.022 "dhchap_digests": [ 00:20:13.022 "sha256", 00:20:13.022 "sha384", 00:20:13.022 "sha512" 00:20:13.022 ], 00:20:13.022 "dhchap_dhgroups": [ 00:20:13.022 "null", 00:20:13.022 "ffdhe2048", 00:20:13.022 "ffdhe3072", 00:20:13.022 "ffdhe4096", 00:20:13.022 "ffdhe6144", 00:20:13.022 "ffdhe8192" 00:20:13.022 ] 00:20:13.022 } 00:20:13.022 }, 00:20:13.022 { 00:20:13.022 "method": "bdev_nvme_attach_controller", 00:20:13.022 "params": { 00:20:13.022 "name": "TLSTEST", 00:20:13.022 "trtype": "TCP", 00:20:13.022 "adrfam": "IPv4", 00:20:13.022 "traddr": "10.0.0.2", 00:20:13.022 "trsvcid": "4420", 00:20:13.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.022 "prchk_reftag": false, 00:20:13.022 "prchk_guard": false, 00:20:13.022 "ctrlr_loss_timeout_sec": 0, 00:20:13.022 "reconnect_delay_sec": 0, 00:20:13.022 "fast_io_fail_timeout_sec": 0, 00:20:13.022 "psk": "key0", 00:20:13.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:13.022 "hdgst": false, 00:20:13.022 "ddgst": false, 00:20:13.022 "multipath": "multipath" 00:20:13.022 } 00:20:13.022 }, 00:20:13.022 { 00:20:13.022 "method": "bdev_nvme_set_hotplug", 00:20:13.022 "params": { 00:20:13.022 "period_us": 100000, 00:20:13.022 "enable": false 00:20:13.022 } 00:20:13.022 }, 00:20:13.022 { 00:20:13.022 "method": "bdev_wait_for_examine" 00:20:13.022 } 00:20:13.022 ] 00:20:13.022 }, 00:20:13.022 { 00:20:13.022 "subsystem": "nbd", 00:20:13.022 "config": [] 00:20:13.022 } 00:20:13.022 ] 00:20:13.022 }' 00:20:13.022 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:13.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:13.022 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.022 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.022 [2024-12-10 14:22:13.735350] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:20:13.022 [2024-12-10 14:22:13.735402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1665971 ] 00:20:13.280 [2024-12-10 14:22:13.815851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.280 [2024-12-10 14:22:13.857112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.280 [2024-12-10 14:22:14.010788] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:14.215 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.215 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:14.215 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:14.215 Running I/O for 10 seconds... 00:20:16.082 5490.00 IOPS, 21.45 MiB/s [2024-12-10T13:22:17.756Z] 5548.50 IOPS, 21.67 MiB/s [2024-12-10T13:22:18.689Z] 5583.00 IOPS, 21.81 MiB/s [2024-12-10T13:22:20.063Z] 5582.75 IOPS, 21.81 MiB/s [2024-12-10T13:22:20.996Z] 5624.00 IOPS, 21.97 MiB/s [2024-12-10T13:22:21.928Z] 5606.83 IOPS, 21.90 MiB/s [2024-12-10T13:22:22.862Z] 5621.29 IOPS, 21.96 MiB/s [2024-12-10T13:22:23.796Z] 5604.00 IOPS, 21.89 MiB/s [2024-12-10T13:22:24.730Z] 5609.11 IOPS, 21.91 MiB/s [2024-12-10T13:22:24.730Z] 5601.40 IOPS, 21.88 MiB/s 00:20:23.990 Latency(us) 00:20:23.990 [2024-12-10T13:22:24.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.990 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:23.990 Verification LBA range: start 0x0 length 0x2000 00:20:23.990 TLSTESTn1 : 10.02 5604.73 21.89 0.00 0.00 22801.30 6272.73 23717.79 00:20:23.990 [2024-12-10T13:22:24.730Z] =================================================================================================================== 00:20:23.990 [2024-12-10T13:22:24.730Z] Total : 5604.73 21.89 0.00 0.00 22801.30 6272.73 23717.79 00:20:23.991 { 00:20:23.991 "results": [ 00:20:23.991 { 00:20:23.991 "job": "TLSTESTn1", 00:20:23.991 "core_mask": "0x4", 00:20:23.991 "workload": "verify", 00:20:23.991 "status": "finished", 00:20:23.991 "verify_range": { 00:20:23.991 "start": 0, 00:20:23.991 "length": 8192 00:20:23.991 }, 00:20:23.991 "queue_depth": 128, 00:20:23.991 "io_size": 4096, 00:20:23.991 "runtime": 10.016539, 00:20:23.991 "iops": 5604.730336496468, 00:20:23.991 "mibps": 21.89347787693933, 00:20:23.991 "io_failed": 0, 00:20:23.991 "io_timeout": 0, 00:20:23.991 "avg_latency_us": 22801.29708273534, 00:20:23.991 "min_latency_us": 6272.731428571428, 00:20:23.991 "max_latency_us": 23717.790476190476 00:20:23.991 } 00:20:23.991 ], 00:20:23.991 "core_count": 1 00:20:23.991 } 00:20:24.249 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:24.249 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1665971 00:20:24.249 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1665971 ']' 00:20:24.249 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1665971 00:20:24.249 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:20:24.249 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.249 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1665971 00:20:24.249 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:24.249 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:24.249 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1665971' 00:20:24.249 killing process with pid 1665971 00:20:24.249 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1665971 00:20:24.249 Received shutdown signal, test time was about 10.000000 seconds 00:20:24.249 00:20:24.249 Latency(us) 00:20:24.249 [2024-12-10T13:22:24.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.249 [2024-12-10T13:22:24.990Z] =================================================================================================================== 00:20:24.250 [2024-12-10T13:22:24.990Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:24.250 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1665971 00:20:24.250 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1665862 00:20:24.250 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1665862 ']' 00:20:24.250 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1665862 00:20:24.250 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:24.250 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.250 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1665862 00:20:24.510 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:24.510 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:24.510 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1665862' 00:20:24.510 killing process with pid 1665862 00:20:24.510 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1665862 00:20:24.510 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1665862 00:20:24.510 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:24.510 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:24.510 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:24.510 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.510 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1667898 00:20:24.510 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:24.510 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1667898 
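
The numbers from the 10-second run are internally consistent; two quick checks against the table above (assuming the configured 4 KiB I/O size and queue depth 128):

    # throughput: 5604.73 IOPS x 4096 B = 22.96 MB/s = 21.89 MiB/s  -> matches the MiB/s column
    # Little's law: 128 / 22801.30 us   = ~5614 IOPS                -> within ~0.2% of measured
    # runtime: 10.016539 s              -> the -t 10 wall clock plus a little RPC overhead

With the data-path run finished, the trace tears down bdevperf (pid 1665971) and the config-driven target (pid 1665862), then launches one more nvmf_tgt instance (pid 1667898, no -m core mask this time, so it lands on core 0) for the final scenario.
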
00:20:24.510 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1667898 ']' 00:20:24.510 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.510 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:24.510 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.510 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:24.510 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.510 [2024-12-10 14:22:25.216454] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:20:24.510 [2024-12-10 14:22:25.216503] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.769 [2024-12-10 14:22:25.301011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.769 [2024-12-10 14:22:25.338610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.769 [2024-12-10 14:22:25.338645] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.769 [2024-12-10 14:22:25.338651] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.769 [2024-12-10 14:22:25.338657] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.769 [2024-12-10 14:22:25.338663] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
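
setup_nvmf_tgt (tls.sh@50-59) now runs for the third time, and the trace below repeats the same RPC sequence verbatim. Condensed, with rpc.py standing in for the full scripts/rpc.py path shown in the trace:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k          # -k enables the (experimental) TLS listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.DeTIfxVeJ5
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

Only a host that presents the PSK behind key0 can connect as nqn.2016-06.io.spdk:host1; everything else is rejected, since allow_any_host is false in the subsystem config dumped earlier.
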
00:20:24.769 [2024-12-10 14:22:25.339158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.769 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.769 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:24.769 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:24.769 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:24.770 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.770 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.770 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.DeTIfxVeJ5 00:20:24.770 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.DeTIfxVeJ5 00:20:24.770 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:25.027 [2024-12-10 14:22:25.642385] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.027 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:25.285 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:25.543 [2024-12-10 14:22:26.039389] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:25.543 [2024-12-10 14:22:26.039596] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.543 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:25.543 malloc0 00:20:25.543 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:25.800 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.DeTIfxVeJ5 00:20:26.058 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:26.316 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:26.316 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1668183 00:20:26.316 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:26.316 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1668183 /var/tmp/bdevperf.sock 00:20:26.316 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1668183 ']' 00:20:26.316 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.316 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.316 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:26.316 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.316 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.316 [2024-12-10 14:22:26.929923] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:20:26.316 [2024-12-10 14:22:26.929975] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1668183 ] 00:20:26.316 [2024-12-10 14:22:27.009718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.316 [2024-12-10 14:22:27.050924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.249 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.249 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:27.249 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DeTIfxVeJ5 00:20:27.249 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:27.507 [2024-12-10 14:22:28.153045] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:27.507 nvme0n1 00:20:27.765 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:27.766 Running I/O for 1 seconds... 
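Everything from setup_nvmf_tgt down to the controller attach above is the core of the TLS test: the same PSK file has to be registered on both sides, and each side registers it in its own keyring because keys are per-application. Pulled together from the rpc.py calls in the trace (rpc.py stands for the full scripts/rpc.py path used in the log):

    # Target side: TCP transport, subsystem, namespace, and a TLS listener (-k)
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.DeTIfxVeJ5
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

    # Host side: register the same key in bdevperf's keyring via its private
    # RPC socket, then attach with --psk so the qpair is brought up over TLS
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DeTIfxVeJ5
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1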
00:20:28.700 5528.00 IOPS, 21.59 MiB/s 00:20:28.700 Latency(us) 00:20:28.700 [2024-12-10T13:22:29.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.700 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:28.700 Verification LBA range: start 0x0 length 0x2000 00:20:28.700 nvme0n1 : 1.01 5586.31 21.82 0.00 0.00 22765.62 4930.80 24841.26 00:20:28.700 [2024-12-10T13:22:29.440Z] =================================================================================================================== 00:20:28.700 [2024-12-10T13:22:29.440Z] Total : 5586.31 21.82 0.00 0.00 22765.62 4930.80 24841.26 00:20:28.700 { 00:20:28.700 "results": [ 00:20:28.700 { 00:20:28.700 "job": "nvme0n1", 00:20:28.700 "core_mask": "0x2", 00:20:28.700 "workload": "verify", 00:20:28.700 "status": "finished", 00:20:28.700 "verify_range": { 00:20:28.700 "start": 0, 00:20:28.700 "length": 8192 00:20:28.700 }, 00:20:28.700 "queue_depth": 128, 00:20:28.700 "io_size": 4096, 00:20:28.700 "runtime": 1.012476, 00:20:28.700 "iops": 5586.305255630751, 00:20:28.700 "mibps": 21.82150490480762, 00:20:28.700 "io_failed": 0, 00:20:28.700 "io_timeout": 0, 00:20:28.700 "avg_latency_us": 22765.621520845958, 00:20:28.700 "min_latency_us": 4930.80380952381, 00:20:28.700 "max_latency_us": 24841.26476190476 00:20:28.700 } 00:20:28.700 ], 00:20:28.700 "core_count": 1 00:20:28.700 } 00:20:28.700 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1668183 00:20:28.700 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1668183 ']' 00:20:28.700 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1668183 00:20:28.700 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:28.700 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:28.700 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1668183 00:20:28.700 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:28.700 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:28.700 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1668183' 00:20:28.700 killing process with pid 1668183 00:20:28.700 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1668183 00:20:28.700 Received shutdown signal, test time was about 1.000000 seconds 00:20:28.700 00:20:28.700 Latency(us) 00:20:28.700 [2024-12-10T13:22:29.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.700 [2024-12-10T13:22:29.440Z] =================================================================================================================== 00:20:28.700 [2024-12-10T13:22:29.440Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:28.700 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1668183 00:20:28.959 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1667898 00:20:28.959 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1667898 ']' 00:20:28.959 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1667898 00:20:28.959 14:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:28.959 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:28.959 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1667898 00:20:28.959 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:28.959 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:28.959 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1667898' 00:20:28.959 killing process with pid 1667898 00:20:28.959 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1667898 00:20:28.959 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1667898 00:20:29.217 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:29.217 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:29.217 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:29.217 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.217 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1668649 00:20:29.217 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:29.217 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1668649 00:20:29.217 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1668649 ']' 00:20:29.217 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.217 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.217 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.217 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.217 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.217 [2024-12-10 14:22:29.861640] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:20:29.217 [2024-12-10 14:22:29.861690] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.217 [2024-12-10 14:22:29.941782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.475 [2024-12-10 14:22:29.981607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.475 [2024-12-10 14:22:29.981638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
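Each nvmf_tgt in this run is started with -e 0xFFFF, so every tracepoint group is recorded into shared memory; the app_setup_trace notices repeated above spell out how to inspect them. Following the log's own hint (both commands come straight from the notices; the copy destination is arbitrary):

    # Snapshot the nvmf app's tracepoints at runtime (instance id 0)
    spdk_trace -s nvmf -i 0
    # Or keep the raw shared-memory file for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0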
00:20:29.475 [2024-12-10 14:22:29.981645] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.475 [2024-12-10 14:22:29.981651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.475 [2024-12-10 14:22:29.981657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:29.475 [2024-12-10 14:22:29.982155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.475 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.475 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:29.475 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:29.475 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:29.475 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.475 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.475 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:29.475 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.475 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.475 [2024-12-10 14:22:30.131026] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.475 malloc0 00:20:29.475 [2024-12-10 14:22:30.159126] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:29.475 [2024-12-10 14:22:30.159329] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.476 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.476 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1668671 00:20:29.476 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:29.476 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1668671 /var/tmp/bdevperf.sock 00:20:29.476 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1668671 ']' 00:20:29.476 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:29.476 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.476 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:29.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:29.476 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.476 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.733 [2024-12-10 14:22:30.231521] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
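Both benchmark rounds start bdevperf with the same switches; annotated for readability (the flag meanings are standard bdevperf options):

    # -m 2            run on core 1 (mask 0x2), leaving core 0 to the target
    # -z              start idle and wait for RPCs; I/O begins only at perform_tests
    # -r <sock>       private RPC socket, kept separate from the target's spdk.sock
    # -q 128 -o 4k    128 outstanding I/Os of 4 KiB each
    # -w verify -t 1  data-verifying workload, run for 1 second
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &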
00:20:29.733 [2024-12-10 14:22:30.231561] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1668671 ] 00:20:29.733 [2024-12-10 14:22:30.312866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.733 [2024-12-10 14:22:30.352342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.733 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.734 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:29.734 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DeTIfxVeJ5 00:20:29.991 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:30.249 [2024-12-10 14:22:30.829166] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:30.249 nvme0n1 00:20:30.249 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:30.507 Running I/O for 1 seconds... 00:20:31.441 5664.00 IOPS, 22.12 MiB/s 00:20:31.441 Latency(us) 00:20:31.441 [2024-12-10T13:22:32.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.441 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:31.441 Verification LBA range: start 0x0 length 0x2000 00:20:31.441 nvme0n1 : 1.02 5687.99 22.22 0.00 0.00 22327.79 6147.90 22094.99 00:20:31.441 [2024-12-10T13:22:32.181Z] =================================================================================================================== 00:20:31.441 [2024-12-10T13:22:32.181Z] Total : 5687.99 22.22 0.00 0.00 22327.79 6147.90 22094.99 00:20:31.441 { 00:20:31.441 "results": [ 00:20:31.441 { 00:20:31.441 "job": "nvme0n1", 00:20:31.441 "core_mask": "0x2", 00:20:31.441 "workload": "verify", 00:20:31.441 "status": "finished", 00:20:31.441 "verify_range": { 00:20:31.441 "start": 0, 00:20:31.441 "length": 8192 00:20:31.441 }, 00:20:31.441 "queue_depth": 128, 00:20:31.441 "io_size": 4096, 00:20:31.441 "runtime": 1.018286, 00:20:31.441 "iops": 5687.989425367726, 00:20:31.441 "mibps": 22.21870869284268, 00:20:31.441 "io_failed": 0, 00:20:31.441 "io_timeout": 0, 00:20:31.441 "avg_latency_us": 22327.790623520126, 00:20:31.441 "min_latency_us": 6147.900952380953, 00:20:31.441 "max_latency_us": 22094.994285714285 00:20:31.441 } 00:20:31.441 ], 00:20:31.441 "core_count": 1 00:20:31.441 } 00:20:31.441 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:31.441 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.441 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.441 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.441 14:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:31.441 "subsystems": [ 00:20:31.441 { 00:20:31.441 "subsystem": "keyring", 00:20:31.441 "config": [ 00:20:31.441 { 00:20:31.441 "method": "keyring_file_add_key", 00:20:31.441 "params": { 00:20:31.441 "name": "key0", 00:20:31.441 "path": "/tmp/tmp.DeTIfxVeJ5" 00:20:31.441 } 00:20:31.441 } 00:20:31.441 ] 00:20:31.441 }, 00:20:31.441 { 00:20:31.441 "subsystem": "iobuf", 00:20:31.441 "config": [ 00:20:31.441 { 00:20:31.441 "method": "iobuf_set_options", 00:20:31.441 "params": { 00:20:31.441 "small_pool_count": 8192, 00:20:31.441 "large_pool_count": 1024, 00:20:31.441 "small_bufsize": 8192, 00:20:31.441 "large_bufsize": 135168, 00:20:31.441 "enable_numa": false 00:20:31.441 } 00:20:31.441 } 00:20:31.441 ] 00:20:31.441 }, 00:20:31.441 { 00:20:31.441 "subsystem": "sock", 00:20:31.441 "config": [ 00:20:31.441 { 00:20:31.441 "method": "sock_set_default_impl", 00:20:31.441 "params": { 00:20:31.441 "impl_name": "posix" 00:20:31.441 } 00:20:31.441 }, 00:20:31.441 { 00:20:31.441 "method": "sock_impl_set_options", 00:20:31.441 "params": { 00:20:31.441 "impl_name": "ssl", 00:20:31.441 "recv_buf_size": 4096, 00:20:31.441 "send_buf_size": 4096, 00:20:31.441 "enable_recv_pipe": true, 00:20:31.441 "enable_quickack": false, 00:20:31.441 "enable_placement_id": 0, 00:20:31.441 "enable_zerocopy_send_server": true, 00:20:31.441 "enable_zerocopy_send_client": false, 00:20:31.441 "zerocopy_threshold": 0, 00:20:31.441 "tls_version": 0, 00:20:31.441 "enable_ktls": false 00:20:31.441 } 00:20:31.441 }, 00:20:31.441 { 00:20:31.441 "method": "sock_impl_set_options", 00:20:31.441 "params": { 00:20:31.441 "impl_name": "posix", 00:20:31.441 "recv_buf_size": 2097152, 00:20:31.441 "send_buf_size": 2097152, 00:20:31.441 "enable_recv_pipe": true, 00:20:31.441 "enable_quickack": false, 00:20:31.441 "enable_placement_id": 0, 00:20:31.441 "enable_zerocopy_send_server": true, 00:20:31.442 "enable_zerocopy_send_client": false, 00:20:31.442 "zerocopy_threshold": 0, 00:20:31.442 "tls_version": 0, 00:20:31.442 "enable_ktls": false 00:20:31.442 } 00:20:31.442 } 00:20:31.442 ] 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "subsystem": "vmd", 00:20:31.442 "config": [] 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "subsystem": "accel", 00:20:31.442 "config": [ 00:20:31.442 { 00:20:31.442 "method": "accel_set_options", 00:20:31.442 "params": { 00:20:31.442 "small_cache_size": 128, 00:20:31.442 "large_cache_size": 16, 00:20:31.442 "task_count": 2048, 00:20:31.442 "sequence_count": 2048, 00:20:31.442 "buf_count": 2048 00:20:31.442 } 00:20:31.442 } 00:20:31.442 ] 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "subsystem": "bdev", 00:20:31.442 "config": [ 00:20:31.442 { 00:20:31.442 "method": "bdev_set_options", 00:20:31.442 "params": { 00:20:31.442 "bdev_io_pool_size": 65535, 00:20:31.442 "bdev_io_cache_size": 256, 00:20:31.442 "bdev_auto_examine": true, 00:20:31.442 "iobuf_small_cache_size": 128, 00:20:31.442 "iobuf_large_cache_size": 16 00:20:31.442 } 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "method": "bdev_raid_set_options", 00:20:31.442 "params": { 00:20:31.442 "process_window_size_kb": 1024, 00:20:31.442 "process_max_bandwidth_mb_sec": 0 00:20:31.442 } 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "method": "bdev_iscsi_set_options", 00:20:31.442 "params": { 00:20:31.442 "timeout_sec": 30 00:20:31.442 } 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "method": "bdev_nvme_set_options", 00:20:31.442 "params": { 00:20:31.442 "action_on_timeout": "none", 00:20:31.442 
"timeout_us": 0, 00:20:31.442 "timeout_admin_us": 0, 00:20:31.442 "keep_alive_timeout_ms": 10000, 00:20:31.442 "arbitration_burst": 0, 00:20:31.442 "low_priority_weight": 0, 00:20:31.442 "medium_priority_weight": 0, 00:20:31.442 "high_priority_weight": 0, 00:20:31.442 "nvme_adminq_poll_period_us": 10000, 00:20:31.442 "nvme_ioq_poll_period_us": 0, 00:20:31.442 "io_queue_requests": 0, 00:20:31.442 "delay_cmd_submit": true, 00:20:31.442 "transport_retry_count": 4, 00:20:31.442 "bdev_retry_count": 3, 00:20:31.442 "transport_ack_timeout": 0, 00:20:31.442 "ctrlr_loss_timeout_sec": 0, 00:20:31.442 "reconnect_delay_sec": 0, 00:20:31.442 "fast_io_fail_timeout_sec": 0, 00:20:31.442 "disable_auto_failback": false, 00:20:31.442 "generate_uuids": false, 00:20:31.442 "transport_tos": 0, 00:20:31.442 "nvme_error_stat": false, 00:20:31.442 "rdma_srq_size": 0, 00:20:31.442 "io_path_stat": false, 00:20:31.442 "allow_accel_sequence": false, 00:20:31.442 "rdma_max_cq_size": 0, 00:20:31.442 "rdma_cm_event_timeout_ms": 0, 00:20:31.442 "dhchap_digests": [ 00:20:31.442 "sha256", 00:20:31.442 "sha384", 00:20:31.442 "sha512" 00:20:31.442 ], 00:20:31.442 "dhchap_dhgroups": [ 00:20:31.442 "null", 00:20:31.442 "ffdhe2048", 00:20:31.442 "ffdhe3072", 00:20:31.442 "ffdhe4096", 00:20:31.442 "ffdhe6144", 00:20:31.442 "ffdhe8192" 00:20:31.442 ] 00:20:31.442 } 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "method": "bdev_nvme_set_hotplug", 00:20:31.442 "params": { 00:20:31.442 "period_us": 100000, 00:20:31.442 "enable": false 00:20:31.442 } 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "method": "bdev_malloc_create", 00:20:31.442 "params": { 00:20:31.442 "name": "malloc0", 00:20:31.442 "num_blocks": 8192, 00:20:31.442 "block_size": 4096, 00:20:31.442 "physical_block_size": 4096, 00:20:31.442 "uuid": "446681b2-8ab0-4205-bb3f-d157bb20c5c2", 00:20:31.442 "optimal_io_boundary": 0, 00:20:31.442 "md_size": 0, 00:20:31.442 "dif_type": 0, 00:20:31.442 "dif_is_head_of_md": false, 00:20:31.442 "dif_pi_format": 0 00:20:31.442 } 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "method": "bdev_wait_for_examine" 00:20:31.442 } 00:20:31.442 ] 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "subsystem": "nbd", 00:20:31.442 "config": [] 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "subsystem": "scheduler", 00:20:31.442 "config": [ 00:20:31.442 { 00:20:31.442 "method": "framework_set_scheduler", 00:20:31.442 "params": { 00:20:31.442 "name": "static" 00:20:31.442 } 00:20:31.442 } 00:20:31.442 ] 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "subsystem": "nvmf", 00:20:31.442 "config": [ 00:20:31.442 { 00:20:31.442 "method": "nvmf_set_config", 00:20:31.442 "params": { 00:20:31.442 "discovery_filter": "match_any", 00:20:31.442 "admin_cmd_passthru": { 00:20:31.442 "identify_ctrlr": false 00:20:31.442 }, 00:20:31.442 "dhchap_digests": [ 00:20:31.442 "sha256", 00:20:31.442 "sha384", 00:20:31.442 "sha512" 00:20:31.442 ], 00:20:31.442 "dhchap_dhgroups": [ 00:20:31.442 "null", 00:20:31.442 "ffdhe2048", 00:20:31.442 "ffdhe3072", 00:20:31.442 "ffdhe4096", 00:20:31.442 "ffdhe6144", 00:20:31.442 "ffdhe8192" 00:20:31.442 ] 00:20:31.442 } 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "method": "nvmf_set_max_subsystems", 00:20:31.442 "params": { 00:20:31.442 "max_subsystems": 1024 00:20:31.442 } 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "method": "nvmf_set_crdt", 00:20:31.442 "params": { 00:20:31.442 "crdt1": 0, 00:20:31.442 "crdt2": 0, 00:20:31.442 "crdt3": 0 00:20:31.442 } 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "method": "nvmf_create_transport", 00:20:31.442 "params": 
{ 00:20:31.442 "trtype": "TCP", 00:20:31.442 "max_queue_depth": 128, 00:20:31.442 "max_io_qpairs_per_ctrlr": 127, 00:20:31.442 "in_capsule_data_size": 4096, 00:20:31.442 "max_io_size": 131072, 00:20:31.442 "io_unit_size": 131072, 00:20:31.442 "max_aq_depth": 128, 00:20:31.442 "num_shared_buffers": 511, 00:20:31.442 "buf_cache_size": 4294967295, 00:20:31.442 "dif_insert_or_strip": false, 00:20:31.442 "zcopy": false, 00:20:31.442 "c2h_success": false, 00:20:31.442 "sock_priority": 0, 00:20:31.442 "abort_timeout_sec": 1, 00:20:31.442 "ack_timeout": 0, 00:20:31.442 "data_wr_pool_size": 0 00:20:31.442 } 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "method": "nvmf_create_subsystem", 00:20:31.442 "params": { 00:20:31.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.442 "allow_any_host": false, 00:20:31.442 "serial_number": "00000000000000000000", 00:20:31.442 "model_number": "SPDK bdev Controller", 00:20:31.442 "max_namespaces": 32, 00:20:31.442 "min_cntlid": 1, 00:20:31.442 "max_cntlid": 65519, 00:20:31.442 "ana_reporting": false 00:20:31.442 } 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "method": "nvmf_subsystem_add_host", 00:20:31.442 "params": { 00:20:31.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.442 "host": "nqn.2016-06.io.spdk:host1", 00:20:31.442 "psk": "key0" 00:20:31.442 } 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "method": "nvmf_subsystem_add_ns", 00:20:31.442 "params": { 00:20:31.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.442 "namespace": { 00:20:31.442 "nsid": 1, 00:20:31.442 "bdev_name": "malloc0", 00:20:31.442 "nguid": "446681B28AB04205BB3FD157BB20C5C2", 00:20:31.442 "uuid": "446681b2-8ab0-4205-bb3f-d157bb20c5c2", 00:20:31.442 "no_auto_visible": false 00:20:31.442 } 00:20:31.442 } 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "method": "nvmf_subsystem_add_listener", 00:20:31.442 "params": { 00:20:31.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.442 "listen_address": { 00:20:31.442 "trtype": "TCP", 00:20:31.442 "adrfam": "IPv4", 00:20:31.442 "traddr": "10.0.0.2", 00:20:31.442 "trsvcid": "4420" 00:20:31.442 }, 00:20:31.442 "secure_channel": false, 00:20:31.442 "sock_impl": "ssl" 00:20:31.442 } 00:20:31.442 } 00:20:31.442 ] 00:20:31.442 } 00:20:31.442 ] 00:20:31.442 }' 00:20:31.443 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:31.701 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:31.701 "subsystems": [ 00:20:31.701 { 00:20:31.701 "subsystem": "keyring", 00:20:31.701 "config": [ 00:20:31.701 { 00:20:31.701 "method": "keyring_file_add_key", 00:20:31.701 "params": { 00:20:31.701 "name": "key0", 00:20:31.701 "path": "/tmp/tmp.DeTIfxVeJ5" 00:20:31.701 } 00:20:31.701 } 00:20:31.701 ] 00:20:31.701 }, 00:20:31.701 { 00:20:31.701 "subsystem": "iobuf", 00:20:31.701 "config": [ 00:20:31.701 { 00:20:31.701 "method": "iobuf_set_options", 00:20:31.701 "params": { 00:20:31.701 "small_pool_count": 8192, 00:20:31.701 "large_pool_count": 1024, 00:20:31.701 "small_bufsize": 8192, 00:20:31.701 "large_bufsize": 135168, 00:20:31.701 "enable_numa": false 00:20:31.701 } 00:20:31.701 } 00:20:31.701 ] 00:20:31.701 }, 00:20:31.701 { 00:20:31.701 "subsystem": "sock", 00:20:31.701 "config": [ 00:20:31.701 { 00:20:31.701 "method": "sock_set_default_impl", 00:20:31.701 "params": { 00:20:31.701 "impl_name": "posix" 00:20:31.701 } 00:20:31.701 }, 00:20:31.701 { 00:20:31.701 "method": "sock_impl_set_options", 00:20:31.701 
"params": { 00:20:31.701 "impl_name": "ssl", 00:20:31.701 "recv_buf_size": 4096, 00:20:31.701 "send_buf_size": 4096, 00:20:31.701 "enable_recv_pipe": true, 00:20:31.701 "enable_quickack": false, 00:20:31.701 "enable_placement_id": 0, 00:20:31.701 "enable_zerocopy_send_server": true, 00:20:31.701 "enable_zerocopy_send_client": false, 00:20:31.701 "zerocopy_threshold": 0, 00:20:31.701 "tls_version": 0, 00:20:31.701 "enable_ktls": false 00:20:31.701 } 00:20:31.701 }, 00:20:31.701 { 00:20:31.701 "method": "sock_impl_set_options", 00:20:31.701 "params": { 00:20:31.701 "impl_name": "posix", 00:20:31.701 "recv_buf_size": 2097152, 00:20:31.701 "send_buf_size": 2097152, 00:20:31.701 "enable_recv_pipe": true, 00:20:31.701 "enable_quickack": false, 00:20:31.701 "enable_placement_id": 0, 00:20:31.701 "enable_zerocopy_send_server": true, 00:20:31.701 "enable_zerocopy_send_client": false, 00:20:31.701 "zerocopy_threshold": 0, 00:20:31.701 "tls_version": 0, 00:20:31.701 "enable_ktls": false 00:20:31.701 } 00:20:31.701 } 00:20:31.701 ] 00:20:31.701 }, 00:20:31.701 { 00:20:31.701 "subsystem": "vmd", 00:20:31.701 "config": [] 00:20:31.701 }, 00:20:31.701 { 00:20:31.701 "subsystem": "accel", 00:20:31.701 "config": [ 00:20:31.701 { 00:20:31.701 "method": "accel_set_options", 00:20:31.701 "params": { 00:20:31.701 "small_cache_size": 128, 00:20:31.701 "large_cache_size": 16, 00:20:31.701 "task_count": 2048, 00:20:31.701 "sequence_count": 2048, 00:20:31.701 "buf_count": 2048 00:20:31.701 } 00:20:31.701 } 00:20:31.701 ] 00:20:31.701 }, 00:20:31.701 { 00:20:31.701 "subsystem": "bdev", 00:20:31.701 "config": [ 00:20:31.701 { 00:20:31.701 "method": "bdev_set_options", 00:20:31.701 "params": { 00:20:31.701 "bdev_io_pool_size": 65535, 00:20:31.701 "bdev_io_cache_size": 256, 00:20:31.701 "bdev_auto_examine": true, 00:20:31.701 "iobuf_small_cache_size": 128, 00:20:31.701 "iobuf_large_cache_size": 16 00:20:31.701 } 00:20:31.701 }, 00:20:31.701 { 00:20:31.701 "method": "bdev_raid_set_options", 00:20:31.701 "params": { 00:20:31.701 "process_window_size_kb": 1024, 00:20:31.701 "process_max_bandwidth_mb_sec": 0 00:20:31.701 } 00:20:31.701 }, 00:20:31.701 { 00:20:31.701 "method": "bdev_iscsi_set_options", 00:20:31.701 "params": { 00:20:31.701 "timeout_sec": 30 00:20:31.701 } 00:20:31.701 }, 00:20:31.701 { 00:20:31.701 "method": "bdev_nvme_set_options", 00:20:31.701 "params": { 00:20:31.701 "action_on_timeout": "none", 00:20:31.701 "timeout_us": 0, 00:20:31.701 "timeout_admin_us": 0, 00:20:31.701 "keep_alive_timeout_ms": 10000, 00:20:31.701 "arbitration_burst": 0, 00:20:31.701 "low_priority_weight": 0, 00:20:31.701 "medium_priority_weight": 0, 00:20:31.701 "high_priority_weight": 0, 00:20:31.701 "nvme_adminq_poll_period_us": 10000, 00:20:31.701 "nvme_ioq_poll_period_us": 0, 00:20:31.701 "io_queue_requests": 512, 00:20:31.701 "delay_cmd_submit": true, 00:20:31.701 "transport_retry_count": 4, 00:20:31.701 "bdev_retry_count": 3, 00:20:31.701 "transport_ack_timeout": 0, 00:20:31.701 "ctrlr_loss_timeout_sec": 0, 00:20:31.701 "reconnect_delay_sec": 0, 00:20:31.701 "fast_io_fail_timeout_sec": 0, 00:20:31.701 "disable_auto_failback": false, 00:20:31.701 "generate_uuids": false, 00:20:31.701 "transport_tos": 0, 00:20:31.701 "nvme_error_stat": false, 00:20:31.701 "rdma_srq_size": 0, 00:20:31.701 "io_path_stat": false, 00:20:31.701 "allow_accel_sequence": false, 00:20:31.701 "rdma_max_cq_size": 0, 00:20:31.701 "rdma_cm_event_timeout_ms": 0, 00:20:31.701 "dhchap_digests": [ 00:20:31.701 "sha256", 00:20:31.701 "sha384", 00:20:31.701 
"sha512" 00:20:31.701 ], 00:20:31.701 "dhchap_dhgroups": [ 00:20:31.701 "null", 00:20:31.701 "ffdhe2048", 00:20:31.701 "ffdhe3072", 00:20:31.701 "ffdhe4096", 00:20:31.701 "ffdhe6144", 00:20:31.701 "ffdhe8192" 00:20:31.701 ] 00:20:31.701 } 00:20:31.701 }, 00:20:31.701 { 00:20:31.701 "method": "bdev_nvme_attach_controller", 00:20:31.701 "params": { 00:20:31.701 "name": "nvme0", 00:20:31.701 "trtype": "TCP", 00:20:31.701 "adrfam": "IPv4", 00:20:31.701 "traddr": "10.0.0.2", 00:20:31.701 "trsvcid": "4420", 00:20:31.701 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.701 "prchk_reftag": false, 00:20:31.701 "prchk_guard": false, 00:20:31.701 "ctrlr_loss_timeout_sec": 0, 00:20:31.701 "reconnect_delay_sec": 0, 00:20:31.701 "fast_io_fail_timeout_sec": 0, 00:20:31.701 "psk": "key0", 00:20:31.701 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:31.701 "hdgst": false, 00:20:31.701 "ddgst": false, 00:20:31.701 "multipath": "multipath" 00:20:31.701 } 00:20:31.701 }, 00:20:31.701 { 00:20:31.701 "method": "bdev_nvme_set_hotplug", 00:20:31.701 "params": { 00:20:31.701 "period_us": 100000, 00:20:31.702 "enable": false 00:20:31.702 } 00:20:31.702 }, 00:20:31.702 { 00:20:31.702 "method": "bdev_enable_histogram", 00:20:31.702 "params": { 00:20:31.702 "name": "nvme0n1", 00:20:31.702 "enable": true 00:20:31.702 } 00:20:31.702 }, 00:20:31.702 { 00:20:31.702 "method": "bdev_wait_for_examine" 00:20:31.702 } 00:20:31.702 ] 00:20:31.702 }, 00:20:31.702 { 00:20:31.702 "subsystem": "nbd", 00:20:31.702 "config": [] 00:20:31.702 } 00:20:31.702 ] 00:20:31.702 }' 00:20:31.702 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1668671 00:20:31.702 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1668671 ']' 00:20:31.702 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1668671 00:20:31.702 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:31.702 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.702 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1668671 00:20:31.960 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:31.960 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:31.960 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1668671' 00:20:31.960 killing process with pid 1668671 00:20:31.960 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1668671 00:20:31.960 Received shutdown signal, test time was about 1.000000 seconds 00:20:31.960 00:20:31.960 Latency(us) 00:20:31.960 [2024-12-10T13:22:32.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.960 [2024-12-10T13:22:32.700Z] =================================================================================================================== 00:20:31.960 [2024-12-10T13:22:32.700Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:31.960 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1668671 00:20:31.960 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1668649 00:20:31.960 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1668649 
']' 00:20:31.960 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1668649 00:20:31.960 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:31.960 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.960 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1668649 00:20:31.960 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:31.960 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:31.960 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1668649' 00:20:31.960 killing process with pid 1668649 00:20:31.960 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1668649 00:20:31.960 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1668649 00:20:32.218 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:32.218 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:32.218 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:32.218 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:32.218 "subsystems": [ 00:20:32.218 { 00:20:32.218 "subsystem": "keyring", 00:20:32.218 "config": [ 00:20:32.218 { 00:20:32.218 "method": "keyring_file_add_key", 00:20:32.218 "params": { 00:20:32.218 "name": "key0", 00:20:32.218 "path": "/tmp/tmp.DeTIfxVeJ5" 00:20:32.218 } 00:20:32.218 } 00:20:32.218 ] 00:20:32.218 }, 00:20:32.218 { 00:20:32.218 "subsystem": "iobuf", 00:20:32.218 "config": [ 00:20:32.218 { 00:20:32.218 "method": "iobuf_set_options", 00:20:32.219 "params": { 00:20:32.219 "small_pool_count": 8192, 00:20:32.219 "large_pool_count": 1024, 00:20:32.219 "small_bufsize": 8192, 00:20:32.219 "large_bufsize": 135168, 00:20:32.219 "enable_numa": false 00:20:32.219 } 00:20:32.219 } 00:20:32.219 ] 00:20:32.219 }, 00:20:32.219 { 00:20:32.219 "subsystem": "sock", 00:20:32.219 "config": [ 00:20:32.219 { 00:20:32.219 "method": "sock_set_default_impl", 00:20:32.219 "params": { 00:20:32.219 "impl_name": "posix" 00:20:32.219 } 00:20:32.219 }, 00:20:32.219 { 00:20:32.219 "method": "sock_impl_set_options", 00:20:32.219 "params": { 00:20:32.219 "impl_name": "ssl", 00:20:32.219 "recv_buf_size": 4096, 00:20:32.219 "send_buf_size": 4096, 00:20:32.219 "enable_recv_pipe": true, 00:20:32.219 "enable_quickack": false, 00:20:32.219 "enable_placement_id": 0, 00:20:32.219 "enable_zerocopy_send_server": true, 00:20:32.219 "enable_zerocopy_send_client": false, 00:20:32.219 "zerocopy_threshold": 0, 00:20:32.219 "tls_version": 0, 00:20:32.219 "enable_ktls": false 00:20:32.219 } 00:20:32.219 }, 00:20:32.219 { 00:20:32.219 "method": "sock_impl_set_options", 00:20:32.219 "params": { 00:20:32.219 "impl_name": "posix", 00:20:32.219 "recv_buf_size": 2097152, 00:20:32.219 "send_buf_size": 2097152, 00:20:32.219 "enable_recv_pipe": true, 00:20:32.219 "enable_quickack": false, 00:20:32.219 "enable_placement_id": 0, 00:20:32.219 "enable_zerocopy_send_server": true, 00:20:32.219 "enable_zerocopy_send_client": false, 00:20:32.219 "zerocopy_threshold": 0, 00:20:32.219 "tls_version": 0, 00:20:32.219 "enable_ktls": 
false 00:20:32.219 } 00:20:32.219 } 00:20:32.219 ] 00:20:32.219 }, 00:20:32.219 { 00:20:32.219 "subsystem": "vmd", 00:20:32.219 "config": [] 00:20:32.219 }, 00:20:32.219 { 00:20:32.219 "subsystem": "accel", 00:20:32.219 "config": [ 00:20:32.219 { 00:20:32.219 "method": "accel_set_options", 00:20:32.219 "params": { 00:20:32.219 "small_cache_size": 128, 00:20:32.219 "large_cache_size": 16, 00:20:32.219 "task_count": 2048, 00:20:32.219 "sequence_count": 2048, 00:20:32.219 "buf_count": 2048 00:20:32.219 } 00:20:32.219 } 00:20:32.219 ] 00:20:32.219 }, 00:20:32.219 { 00:20:32.219 "subsystem": "bdev", 00:20:32.219 "config": [ 00:20:32.219 { 00:20:32.219 "method": "bdev_set_options", 00:20:32.219 "params": { 00:20:32.219 "bdev_io_pool_size": 65535, 00:20:32.219 "bdev_io_cache_size": 256, 00:20:32.219 "bdev_auto_examine": true, 00:20:32.219 "iobuf_small_cache_size": 128, 00:20:32.219 "iobuf_large_cache_size": 16 00:20:32.219 } 00:20:32.219 }, 00:20:32.219 { 00:20:32.219 "method": "bdev_raid_set_options", 00:20:32.219 "params": { 00:20:32.219 "process_window_size_kb": 1024, 00:20:32.219 "process_max_bandwidth_mb_sec": 0 00:20:32.219 } 00:20:32.219 }, 00:20:32.219 { 00:20:32.219 "method": "bdev_iscsi_set_options", 00:20:32.219 "params": { 00:20:32.219 "timeout_sec": 30 00:20:32.219 } 00:20:32.219 }, 00:20:32.219 { 00:20:32.219 "method": "bdev_nvme_set_options", 00:20:32.219 "params": { 00:20:32.219 "action_on_timeout": "none", 00:20:32.219 "timeout_us": 0, 00:20:32.219 "timeout_admin_us": 0, 00:20:32.219 "keep_alive_timeout_ms": 10000, 00:20:32.219 "arbitration_burst": 0, 00:20:32.219 "low_priority_weight": 0, 00:20:32.219 "medium_priority_weight": 0, 00:20:32.219 "high_priority_weight": 0, 00:20:32.219 "nvme_adminq_poll_period_us": 10000, 00:20:32.219 "nvme_ioq_poll_period_us": 0, 00:20:32.219 "io_queue_requests": 0, 00:20:32.219 "delay_cmd_submit": true, 00:20:32.219 "transport_retry_count": 4, 00:20:32.219 "bdev_retry_count": 3, 00:20:32.219 "transport_ack_timeout": 0, 00:20:32.219 "ctrlr_loss_timeout_sec": 0, 00:20:32.219 "reconnect_delay_sec": 0, 00:20:32.219 "fast_io_fail_timeout_sec": 0, 00:20:32.219 "disable_auto_failback": false, 00:20:32.219 "generate_uuids": false, 00:20:32.219 "transport_tos": 0, 00:20:32.219 "nvme_error_stat": false, 00:20:32.219 "rdma_srq_size": 0, 00:20:32.219 "io_path_stat": false, 00:20:32.219 "allow_accel_sequence": false, 00:20:32.219 "rdma_max_cq_size": 0, 00:20:32.219 "rdma_cm_event_timeout_ms": 0, 00:20:32.219 "dhchap_digests": [ 00:20:32.219 "sha256", 00:20:32.219 "sha384", 00:20:32.219 "sha512" 00:20:32.219 ], 00:20:32.219 "dhchap_dhgroups": [ 00:20:32.219 "null", 00:20:32.219 "ffdhe2048", 00:20:32.219 "ffdhe3072", 00:20:32.219 "ffdhe4096", 00:20:32.219 "ffdhe6144", 00:20:32.219 "ffdhe8192" 00:20:32.219 ] 00:20:32.219 } 00:20:32.219 }, 00:20:32.219 { 00:20:32.219 "method": "bdev_nvme_set_hotplug", 00:20:32.219 "params": { 00:20:32.219 "period_us": 100000, 00:20:32.219 "enable": false 00:20:32.219 } 00:20:32.219 }, 00:20:32.219 { 00:20:32.219 "method": "bdev_malloc_create", 00:20:32.219 "params": { 00:20:32.219 "name": "malloc0", 00:20:32.219 "num_blocks": 8192, 00:20:32.219 "block_size": 4096, 00:20:32.219 "physical_block_size": 4096, 00:20:32.219 "uuid": "446681b2-8ab0-4205-bb3f-d157bb20c5c2", 00:20:32.219 "optimal_io_boundary": 0, 00:20:32.219 "md_size": 0, 00:20:32.219 "dif_type": 0, 00:20:32.219 "dif_is_head_of_md": false, 00:20:32.219 "dif_pi_format": 0 00:20:32.219 } 00:20:32.219 }, 00:20:32.219 { 00:20:32.219 "method": "bdev_wait_for_examine" 
00:20:32.219 } 00:20:32.219 ] 00:20:32.219 }, 00:20:32.219 { 00:20:32.219 "subsystem": "nbd", 00:20:32.219 "config": [] 00:20:32.219 }, 00:20:32.219 { 00:20:32.219 "subsystem": "scheduler", 00:20:32.219 "config": [ 00:20:32.219 { 00:20:32.219 "method": "framework_set_scheduler", 00:20:32.219 "params": { 00:20:32.219 "name": "static" 00:20:32.219 } 00:20:32.219 } 00:20:32.219 ] 00:20:32.219 }, 00:20:32.219 { 00:20:32.219 "subsystem": "nvmf", 00:20:32.219 "config": [ 00:20:32.219 { 00:20:32.219 "method": "nvmf_set_config", 00:20:32.219 "params": { 00:20:32.219 "discovery_filter": "match_any", 00:20:32.219 "admin_cmd_passthru": { 00:20:32.219 "identify_ctrlr": false 00:20:32.219 }, 00:20:32.219 "dhchap_digests": [ 00:20:32.219 "sha256", 00:20:32.219 "sha384", 00:20:32.219 "sha512" 00:20:32.219 ], 00:20:32.219 "dhchap_dhgroups": [ 00:20:32.219 "null", 00:20:32.219 "ffdhe2048", 00:20:32.219 "ffdhe3072", 00:20:32.219 "ffdhe4096", 00:20:32.219 "ffdhe6144", 00:20:32.219 "ffdhe8192" 00:20:32.219 ] 00:20:32.219 } 00:20:32.219 }, 00:20:32.219 { 00:20:32.219 "method": "nvmf_set_max_subsystems", 00:20:32.219 "params": { 00:20:32.219 "max_subsystems": 1024 00:20:32.219 } 00:20:32.219 }, 00:20:32.219 { 00:20:32.219 "method": "nvmf_set_crdt", 00:20:32.219 "params": { 00:20:32.219 "crdt1": 0, 00:20:32.219 "crdt2": 0, 00:20:32.219 "crdt3": 0 00:20:32.219 } 00:20:32.219 }, 00:20:32.219 { 00:20:32.219 "method": "nvmf_create_transport", 00:20:32.219 "params": { 00:20:32.219 "trtype": "TCP", 00:20:32.219 "max_queue_depth": 128, 00:20:32.219 "max_io_qpairs_per_ctrlr": 127, 00:20:32.219 "in_capsule_data_size": 4096, 00:20:32.219 "max_io_size": 131072, 00:20:32.219 "io_unit_size": 131072, 00:20:32.219 "max_aq_depth": 128, 00:20:32.219 "num_shared_buffers": 511, 00:20:32.219 "buf_cache_size": 4294967295, 00:20:32.219 "dif_insert_or_strip": false, 00:20:32.219 "zcopy": false, 00:20:32.219 "c2h_success": false, 00:20:32.219 "sock_priority": 0, 00:20:32.219 "abort_timeout_sec": 1, 00:20:32.219 "ack_timeout": 0, 00:20:32.219 "data_wr_pool_size": 0 00:20:32.219 } 00:20:32.219 }, 00:20:32.219 { 00:20:32.219 "method": "nvmf_create_subsystem", 00:20:32.219 "params": { 00:20:32.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.219 "allow_any_host": false, 00:20:32.219 "serial_number": "00000000000000000000", 00:20:32.219 "model_number": "SPDK bdev Controller", 00:20:32.219 "max_namespaces": 32, 00:20:32.219 "min_cntlid": 1, 00:20:32.219 "max_cntlid": 65519, 00:20:32.219 "ana_reporting": false 00:20:32.219 } 00:20:32.219 }, 00:20:32.219 { 00:20:32.219 "method": "nvmf_subsystem_add_host", 00:20:32.219 "params": { 00:20:32.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.219 "host": "nqn.2016-06.io.spdk:host1", 00:20:32.219 "psk": "key0" 00:20:32.219 } 00:20:32.219 }, 00:20:32.219 { 00:20:32.219 "method": "nvmf_subsystem_add_ns", 00:20:32.219 "params": { 00:20:32.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.219 "namespace": { 00:20:32.219 "nsid": 1, 00:20:32.219 "bdev_name": "malloc0", 00:20:32.219 "nguid": "446681B28AB04205BB3FD157BB20C5C2", 00:20:32.219 "uuid": "446681b2-8ab0-4205-bb3f-d157bb20c5c2", 00:20:32.219 "no_auto_visible": false 00:20:32.219 } 00:20:32.219 } 00:20:32.219 }, 00:20:32.219 { 00:20:32.219 "method": "nvmf_subsystem_add_listener", 00:20:32.219 "params": { 00:20:32.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.219 "listen_address": { 00:20:32.219 "trtype": "TCP", 00:20:32.219 "adrfam": "IPv4", 00:20:32.219 "traddr": "10.0.0.2", 00:20:32.219 "trsvcid": "4420" 00:20:32.219 }, 00:20:32.219 
"secure_channel": false, 00:20:32.219 "sock_impl": "ssl" 00:20:32.219 } 00:20:32.219 } 00:20:32.219 ] 00:20:32.219 } 00:20:32.219 ] 00:20:32.219 }' 00:20:32.219 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.219 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1669142 00:20:32.219 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1669142 00:20:32.219 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:32.219 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1669142 ']' 00:20:32.219 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.219 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:32.219 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.219 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:32.219 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.219 [2024-12-10 14:22:32.899354] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:20:32.219 [2024-12-10 14:22:32.899399] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.478 [2024-12-10 14:22:32.981611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.478 [2024-12-10 14:22:33.020403] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.478 [2024-12-10 14:22:33.020438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.478 [2024-12-10 14:22:33.020445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:32.478 [2024-12-10 14:22:33.020451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:32.478 [2024-12-10 14:22:33.020456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:32.478 [2024-12-10 14:22:33.021001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.735 [2024-12-10 14:22:33.234119] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.735 [2024-12-10 14:22:33.266154] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:32.735 [2024-12-10 14:22:33.266373] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:32.993 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:32.993 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:32.993 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:32.993 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:32.993 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.252 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.252 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1669381 00:20:33.252 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1669381 /var/tmp/bdevperf.sock 00:20:33.252 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1669381 ']' 00:20:33.252 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:33.252 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:33.252 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.252 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:33.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
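The initiator gets the same treatment: bdevperf is relaunched with -c /dev/fd/63, presumably <(echo "$bperfcfg"), and because that JSON already contains keyring_file_add_key plus a bdev_nvme_attach_controller entry with "psk": "key0", the TLS connection is re-established during startup instead of by hand:

    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &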
00:20:33.252 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:33.252 "subsystems": [ 00:20:33.252 { 00:20:33.252 "subsystem": "keyring", 00:20:33.252 "config": [ 00:20:33.252 { 00:20:33.252 "method": "keyring_file_add_key", 00:20:33.252 "params": { 00:20:33.252 "name": "key0", 00:20:33.252 "path": "/tmp/tmp.DeTIfxVeJ5" 00:20:33.252 } 00:20:33.252 } 00:20:33.252 ] 00:20:33.252 }, 00:20:33.252 { 00:20:33.252 "subsystem": "iobuf", 00:20:33.252 "config": [ 00:20:33.252 { 00:20:33.252 "method": "iobuf_set_options", 00:20:33.252 "params": { 00:20:33.252 "small_pool_count": 8192, 00:20:33.252 "large_pool_count": 1024, 00:20:33.252 "small_bufsize": 8192, 00:20:33.252 "large_bufsize": 135168, 00:20:33.252 "enable_numa": false 00:20:33.252 } 00:20:33.252 } 00:20:33.252 ] 00:20:33.252 }, 00:20:33.252 { 00:20:33.252 "subsystem": "sock", 00:20:33.252 "config": [ 00:20:33.252 { 00:20:33.252 "method": "sock_set_default_impl", 00:20:33.252 "params": { 00:20:33.252 "impl_name": "posix" 00:20:33.252 } 00:20:33.252 }, 00:20:33.252 { 00:20:33.252 "method": "sock_impl_set_options", 00:20:33.252 "params": { 00:20:33.252 "impl_name": "ssl", 00:20:33.252 "recv_buf_size": 4096, 00:20:33.252 "send_buf_size": 4096, 00:20:33.252 "enable_recv_pipe": true, 00:20:33.252 "enable_quickack": false, 00:20:33.252 "enable_placement_id": 0, 00:20:33.252 "enable_zerocopy_send_server": true, 00:20:33.252 "enable_zerocopy_send_client": false, 00:20:33.252 "zerocopy_threshold": 0, 00:20:33.252 "tls_version": 0, 00:20:33.252 "enable_ktls": false 00:20:33.252 } 00:20:33.252 }, 00:20:33.252 { 00:20:33.252 "method": "sock_impl_set_options", 00:20:33.252 "params": { 00:20:33.252 "impl_name": "posix", 00:20:33.252 "recv_buf_size": 2097152, 00:20:33.252 "send_buf_size": 2097152, 00:20:33.252 "enable_recv_pipe": true, 00:20:33.252 "enable_quickack": false, 00:20:33.252 "enable_placement_id": 0, 00:20:33.252 "enable_zerocopy_send_server": true, 00:20:33.252 "enable_zerocopy_send_client": false, 00:20:33.252 "zerocopy_threshold": 0, 00:20:33.252 "tls_version": 0, 00:20:33.252 "enable_ktls": false 00:20:33.252 } 00:20:33.252 } 00:20:33.252 ] 00:20:33.252 }, 00:20:33.252 { 00:20:33.252 "subsystem": "vmd", 00:20:33.252 "config": [] 00:20:33.252 }, 00:20:33.252 { 00:20:33.252 "subsystem": "accel", 00:20:33.252 "config": [ 00:20:33.252 { 00:20:33.252 "method": "accel_set_options", 00:20:33.252 "params": { 00:20:33.252 "small_cache_size": 128, 00:20:33.252 "large_cache_size": 16, 00:20:33.252 "task_count": 2048, 00:20:33.252 "sequence_count": 2048, 00:20:33.252 "buf_count": 2048 00:20:33.252 } 00:20:33.252 } 00:20:33.252 ] 00:20:33.252 }, 00:20:33.252 { 00:20:33.252 "subsystem": "bdev", 00:20:33.252 "config": [ 00:20:33.252 { 00:20:33.252 "method": "bdev_set_options", 00:20:33.252 "params": { 00:20:33.252 "bdev_io_pool_size": 65535, 00:20:33.252 "bdev_io_cache_size": 256, 00:20:33.252 "bdev_auto_examine": true, 00:20:33.252 "iobuf_small_cache_size": 128, 00:20:33.252 "iobuf_large_cache_size": 16 00:20:33.252 } 00:20:33.252 }, 00:20:33.252 { 00:20:33.252 "method": "bdev_raid_set_options", 00:20:33.252 "params": { 00:20:33.252 "process_window_size_kb": 1024, 00:20:33.252 "process_max_bandwidth_mb_sec": 0 00:20:33.252 } 00:20:33.252 }, 00:20:33.252 { 00:20:33.252 "method": "bdev_iscsi_set_options", 00:20:33.252 "params": { 00:20:33.252 "timeout_sec": 30 00:20:33.252 } 00:20:33.252 }, 00:20:33.252 { 00:20:33.252 "method": "bdev_nvme_set_options", 00:20:33.252 "params": { 00:20:33.252 "action_on_timeout": "none", 
00:20:33.252 "timeout_us": 0, 00:20:33.252 "timeout_admin_us": 0, 00:20:33.252 "keep_alive_timeout_ms": 10000, 00:20:33.252 "arbitration_burst": 0, 00:20:33.252 "low_priority_weight": 0, 00:20:33.252 "medium_priority_weight": 0, 00:20:33.252 "high_priority_weight": 0, 00:20:33.252 "nvme_adminq_poll_period_us": 10000, 00:20:33.252 "nvme_ioq_poll_period_us": 0, 00:20:33.252 "io_queue_requests": 512, 00:20:33.252 "delay_cmd_submit": true, 00:20:33.252 "transport_retry_count": 4, 00:20:33.252 "bdev_retry_count": 3, 00:20:33.252 "transport_ack_timeout": 0, 00:20:33.252 "ctrlr_loss_timeout_sec": 0, 00:20:33.252 "reconnect_delay_sec": 0, 00:20:33.252 "fast_io_fail_timeout_sec": 0, 00:20:33.252 "disable_auto_failback": false, 00:20:33.252 "generate_uuids": false, 00:20:33.252 "transport_tos": 0, 00:20:33.252 "nvme_error_stat": false, 00:20:33.252 "rdma_srq_size": 0, 00:20:33.252 "io_path_stat": false, 00:20:33.252 "allow_accel_sequence": false, 00:20:33.252 "rdma_max_cq_size": 0, 00:20:33.252 "rdma_cm_event_timeout_ms": 0, 00:20:33.252 "dhchap_digests": [ 00:20:33.252 "sha256", 00:20:33.252 "sha384", 00:20:33.252 "sha512" 00:20:33.252 ], 00:20:33.252 "dhchap_dhgroups": [ 00:20:33.252 "null", 00:20:33.252 "ffdhe2048", 00:20:33.252 "ffdhe3072", 00:20:33.252 "ffdhe4096", 00:20:33.252 "ffdhe6144", 00:20:33.252 "ffdhe8192" 00:20:33.252 ] 00:20:33.252 } 00:20:33.252 }, 00:20:33.252 { 00:20:33.252 "method": "bdev_nvme_attach_controller", 00:20:33.252 "params": { 00:20:33.252 "name": "nvme0", 00:20:33.252 "trtype": "TCP", 00:20:33.252 "adrfam": "IPv4", 00:20:33.252 "traddr": "10.0.0.2", 00:20:33.252 "trsvcid": "4420", 00:20:33.252 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.252 "prchk_reftag": false, 00:20:33.252 "prchk_guard": false, 00:20:33.252 "ctrlr_loss_timeout_sec": 0, 00:20:33.252 "reconnect_delay_sec": 0, 00:20:33.252 "fast_io_fail_timeout_sec": 0, 00:20:33.252 "psk": "key0", 00:20:33.252 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:33.252 "hdgst": false, 00:20:33.252 "ddgst": false, 00:20:33.252 "multipath": "multipath" 00:20:33.252 } 00:20:33.252 }, 00:20:33.252 { 00:20:33.252 "method": "bdev_nvme_set_hotplug", 00:20:33.252 "params": { 00:20:33.252 "period_us": 100000, 00:20:33.252 "enable": false 00:20:33.252 } 00:20:33.252 }, 00:20:33.252 { 00:20:33.252 "method": "bdev_enable_histogram", 00:20:33.252 "params": { 00:20:33.252 "name": "nvme0n1", 00:20:33.252 "enable": true 00:20:33.252 } 00:20:33.252 }, 00:20:33.252 { 00:20:33.252 "method": "bdev_wait_for_examine" 00:20:33.252 } 00:20:33.252 ] 00:20:33.252 }, 00:20:33.252 { 00:20:33.252 "subsystem": "nbd", 00:20:33.252 "config": [] 00:20:33.252 } 00:20:33.252 ] 00:20:33.252 }' 00:20:33.252 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.252 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.252 [2024-12-10 14:22:33.815602] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:20:33.253 [2024-12-10 14:22:33.815646] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1669381 ] 00:20:33.253 [2024-12-10 14:22:33.892932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.253 [2024-12-10 14:22:33.931685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.510 [2024-12-10 14:22:34.084994] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:34.076 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.076 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:34.076 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:34.076 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:34.335 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.335 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:34.335 Running I/O for 1 seconds... 00:20:35.266 5391.00 IOPS, 21.06 MiB/s 00:20:35.266 Latency(us) 00:20:35.266 [2024-12-10T13:22:36.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.266 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:35.266 Verification LBA range: start 0x0 length 0x2000 00:20:35.266 nvme0n1 : 1.02 5431.84 21.22 0.00 0.00 23385.16 4712.35 30208.98 00:20:35.266 [2024-12-10T13:22:36.006Z] =================================================================================================================== 00:20:35.266 [2024-12-10T13:22:36.006Z] Total : 5431.84 21.22 0.00 0.00 23385.16 4712.35 30208.98 00:20:35.266 { 00:20:35.266 "results": [ 00:20:35.266 { 00:20:35.267 "job": "nvme0n1", 00:20:35.267 "core_mask": "0x2", 00:20:35.267 "workload": "verify", 00:20:35.267 "status": "finished", 00:20:35.267 "verify_range": { 00:20:35.267 "start": 0, 00:20:35.267 "length": 8192 00:20:35.267 }, 00:20:35.267 "queue_depth": 128, 00:20:35.267 "io_size": 4096, 00:20:35.267 "runtime": 1.016046, 00:20:35.267 "iops": 5431.840684378463, 00:20:35.267 "mibps": 21.218127673353372, 00:20:35.267 "io_failed": 0, 00:20:35.267 "io_timeout": 0, 00:20:35.267 "avg_latency_us": 23385.162719954445, 00:20:35.267 "min_latency_us": 4712.350476190476, 00:20:35.267 "max_latency_us": 30208.975238095238 00:20:35.267 } 00:20:35.267 ], 00:20:35.267 "core_count": 1 00:20:35.267 } 00:20:35.267 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:35.267 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:35.267 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:35.267 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:35.267 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:35.267 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id 
= --pid ']' 00:20:35.267 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:35.525 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:35.525 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:35.525 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:35.525 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:35.525 nvmf_trace.0 00:20:35.525 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:35.525 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1669381 00:20:35.525 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1669381 ']' 00:20:35.525 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1669381 00:20:35.525 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:35.525 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.525 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1669381 00:20:35.525 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:35.525 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:35.525 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1669381' 00:20:35.525 killing process with pid 1669381 00:20:35.525 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1669381 00:20:35.525 Received shutdown signal, test time was about 1.000000 seconds 00:20:35.525 00:20:35.525 Latency(us) 00:20:35.525 [2024-12-10T13:22:36.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.525 [2024-12-10T13:22:36.265Z] =================================================================================================================== 00:20:35.525 [2024-12-10T13:22:36.265Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:35.525 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1669381 00:20:35.783 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:35.783 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:35.783 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:35.783 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:35.783 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:35.783 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:35.783 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:35.783 rmmod nvme_tcp 00:20:35.783 rmmod nvme_fabrics 00:20:35.783 rmmod nvme_keyring 00:20:35.783 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:35.783 14:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:35.783 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:35.783 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1669142 ']' 00:20:35.783 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1669142 00:20:35.783 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1669142 ']' 00:20:35.783 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1669142 00:20:35.783 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:35.783 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.783 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1669142 00:20:35.783 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:35.783 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:35.783 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1669142' 00:20:35.783 killing process with pid 1669142 00:20:35.783 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1669142 00:20:35.783 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1669142 00:20:36.042 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:36.042 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:36.042 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:36.042 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:36.042 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:36.042 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:36.042 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:36.042 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:36.042 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:36.042 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.042 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.042 14:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.947 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:37.947 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.XSkJgbsAxU /tmp/tmp.LuHqK89Fd8 /tmp/tmp.DeTIfxVeJ5 00:20:37.947 00:20:37.947 real 1m21.090s 00:20:37.947 user 2m3.782s 00:20:37.947 sys 0m30.960s 00:20:37.947 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:37.947 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.947 ************************************ 00:20:37.947 END TEST nvmf_tls 
00:20:37.947 ************************************ 00:20:37.947 14:22:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:37.947 14:22:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:37.947 14:22:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:37.947 14:22:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:38.206 ************************************ 00:20:38.206 START TEST nvmf_fips 00:20:38.206 ************************************ 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:38.206 * Looking for test storage... 00:20:38.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:38.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.206 --rc genhtml_branch_coverage=1 00:20:38.206 --rc genhtml_function_coverage=1 00:20:38.206 --rc genhtml_legend=1 00:20:38.206 --rc geninfo_all_blocks=1 00:20:38.206 --rc geninfo_unexecuted_blocks=1 00:20:38.206 00:20:38.206 ' 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:38.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.206 --rc genhtml_branch_coverage=1 00:20:38.206 --rc genhtml_function_coverage=1 00:20:38.206 --rc genhtml_legend=1 00:20:38.206 --rc geninfo_all_blocks=1 00:20:38.206 --rc geninfo_unexecuted_blocks=1 00:20:38.206 00:20:38.206 ' 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:38.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.206 --rc genhtml_branch_coverage=1 00:20:38.206 --rc genhtml_function_coverage=1 00:20:38.206 --rc genhtml_legend=1 00:20:38.206 --rc geninfo_all_blocks=1 00:20:38.206 --rc geninfo_unexecuted_blocks=1 00:20:38.206 00:20:38.206 ' 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:38.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.206 --rc genhtml_branch_coverage=1 00:20:38.206 --rc genhtml_function_coverage=1 00:20:38.206 --rc genhtml_legend=1 00:20:38.206 --rc geninfo_all_blocks=1 00:20:38.206 --rc geninfo_unexecuted_blocks=1 00:20:38.206 00:20:38.206 ' 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:38.206 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:38.207 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.207 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.207 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.207 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.207 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.207 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.207 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:38.207 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.207 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:38.207 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:38.207 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:38.207 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.207 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.207 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.207 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:38.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:38.207 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:38.207 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:38.207 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:38.207 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:38.207 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:38.207 14:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:38.207 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:38.207 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:38.464 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:38.464 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:38.464 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:38.464 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:38.465 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:38.465 Error setting digest 00:20:38.465 40B20C6F367F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:38.465 40B20C6F367F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:38.465 
14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:38.465 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:45.028 14:22:45 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:45.028 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:45.028 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:45.028 14:22:45 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:45.028 Found net devices under 0000:af:00.0: cvl_0_0 00:20:45.028 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:45.029 Found net devices under 0000:af:00.1: cvl_0_1 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:45.029 14:22:45 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:45.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:45.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:20:45.029 00:20:45.029 --- 10.0.0.2 ping statistics --- 00:20:45.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.029 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:45.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:45.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:20:45.029 00:20:45.029 --- 10.0.0.1 ping statistics --- 00:20:45.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.029 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:45.029 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:45.287 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:45.287 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:45.287 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:45.287 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:45.287 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1673655 00:20:45.287 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:45.287 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1673655 00:20:45.287 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1673655 ']' 00:20:45.287 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.287 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.287 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.287 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.287 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:45.287 [2024-12-10 14:22:45.874495] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
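The two pings above validate the loopback topology the harness builds for these phy tests: one port of the E810 pair is moved into a private network namespace to act as the target (10.0.0.2), while the other stays in the root namespace as the initiator (10.0.0.1). A condensed sketch of that setup, using the device and namespace names from the log:

    # target port lives in its own netns; initiator stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up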
00:20:45.287 [2024-12-10 14:22:45.874541] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.287 [2024-12-10 14:22:45.960279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.287 [2024-12-10 14:22:46.003356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.287 [2024-12-10 14:22:46.003385] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.287 [2024-12-10 14:22:46.003393] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.287 [2024-12-10 14:22:46.003399] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.287 [2024-12-10 14:22:46.003407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:45.287 [2024-12-10 14:22:46.003928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.222 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.222 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:46.222 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:46.222 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:46.222 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:46.222 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.222 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:46.222 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:46.222 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:46.222 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Nxt 00:20:46.222 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:46.222 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Nxt 00:20:46.222 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Nxt 00:20:46.222 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Nxt 00:20:46.222 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:46.222 [2024-12-10 14:22:46.909937] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.222 [2024-12-10 14:22:46.925941] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:46.222 [2024-12-10 14:22:46.926104] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.480 malloc0 00:20:46.480 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:46.480 14:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1673906 00:20:46.480 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:46.480 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1673906 /var/tmp/bdevperf.sock 00:20:46.480 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1673906 ']' 00:20:46.480 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:46.480 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.480 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:46.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:46.480 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.480 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:46.480 [2024-12-10 14:22:47.058078] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:20:46.480 [2024-12-10 14:22:47.058128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1673906 ] 00:20:46.480 [2024-12-10 14:22:47.134865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.480 [2024-12-10 14:22:47.175808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.486 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.486 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:47.486 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Nxt 00:20:47.486 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:47.755 [2024-12-10 14:22:48.237884] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:47.755 TLSTESTn1 00:20:47.755 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:47.755 Running I/O for 10 seconds... 
00:20:50.067 5487.00 IOPS, 21.43 MiB/s [2024-12-10T13:22:51.741Z] 5591.00 IOPS, 21.84 MiB/s [2024-12-10T13:22:52.675Z] 5601.00 IOPS, 21.88 MiB/s [2024-12-10T13:22:53.608Z] 5627.50 IOPS, 21.98 MiB/s [2024-12-10T13:22:54.542Z] 5645.60 IOPS, 22.05 MiB/s [2024-12-10T13:22:55.475Z] 5635.83 IOPS, 22.01 MiB/s [2024-12-10T13:22:56.849Z] 5642.00 IOPS, 22.04 MiB/s [2024-12-10T13:22:57.783Z] 5633.50 IOPS, 22.01 MiB/s [2024-12-10T13:22:58.717Z] 5627.89 IOPS, 21.98 MiB/s [2024-12-10T13:22:58.717Z] 5613.70 IOPS, 21.93 MiB/s 00:20:57.977 Latency(us) 00:20:57.977 [2024-12-10T13:22:58.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.977 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:57.977 Verification LBA range: start 0x0 length 0x2000 00:20:57.977 TLSTESTn1 : 10.01 5619.72 21.95 0.00 0.00 22744.46 5242.88 23842.62 00:20:57.977 [2024-12-10T13:22:58.717Z] =================================================================================================================== 00:20:57.977 [2024-12-10T13:22:58.717Z] Total : 5619.72 21.95 0.00 0.00 22744.46 5242.88 23842.62 00:20:57.977 { 00:20:57.977 "results": [ 00:20:57.977 { 00:20:57.977 "job": "TLSTESTn1", 00:20:57.977 "core_mask": "0x4", 00:20:57.977 "workload": "verify", 00:20:57.977 "status": "finished", 00:20:57.977 "verify_range": { 00:20:57.977 "start": 0, 00:20:57.977 "length": 8192 00:20:57.977 }, 00:20:57.977 "queue_depth": 128, 00:20:57.977 "io_size": 4096, 00:20:57.977 "runtime": 10.011881, 00:20:57.977 "iops": 5619.723206857932, 00:20:57.977 "mibps": 21.9520437767888, 00:20:57.977 "io_failed": 0, 00:20:57.977 "io_timeout": 0, 00:20:57.977 "avg_latency_us": 22744.461257337858, 00:20:57.977 "min_latency_us": 5242.88, 00:20:57.977 "max_latency_us": 23842.620952380952 00:20:57.977 } 00:20:57.977 ], 00:20:57.977 "core_count": 1 00:20:57.977 } 00:20:57.977 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:57.977 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:57.977 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:57.977 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:57.977 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:57.977 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:57.977 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:57.977 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:57.977 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:57.977 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:57.977 nvmf_trace.0 00:20:57.977 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:57.977 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1673906 00:20:57.977 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1673906 ']' 00:20:57.977 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 
-- # kill -0 1673906 00:20:57.977 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:57.977 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.977 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1673906 00:20:57.977 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:57.977 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:57.977 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1673906' 00:20:57.977 killing process with pid 1673906 00:20:57.977 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1673906 00:20:57.977 Received shutdown signal, test time was about 10.000000 seconds 00:20:57.977 00:20:57.977 Latency(us) 00:20:57.977 [2024-12-10T13:22:58.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.977 [2024-12-10T13:22:58.717Z] =================================================================================================================== 00:20:57.977 [2024-12-10T13:22:58.717Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:57.977 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1673906 00:20:58.237 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:58.237 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:58.237 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:58.237 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:58.237 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:58.237 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:58.237 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:58.237 rmmod nvme_tcp 00:20:58.237 rmmod nvme_fabrics 00:20:58.237 rmmod nvme_keyring 00:20:58.237 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:58.237 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:58.237 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:58.237 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1673655 ']' 00:20:58.237 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1673655 00:20:58.237 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1673655 ']' 00:20:58.237 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1673655 00:20:58.237 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:58.237 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.237 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1673655 00:20:58.237 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:58.237 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:58.237 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1673655' 00:20:58.237 killing process with pid 1673655 00:20:58.237 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1673655 00:20:58.237 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1673655 00:20:58.496 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:58.496 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:58.496 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:58.496 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:58.496 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:58.496 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:58.496 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:58.496 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:58.496 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:58.496 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.496 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.496 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.401 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:00.401 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Nxt 00:21:00.660 00:21:00.660 real 0m22.419s 00:21:00.660 user 0m23.582s 00:21:00.660 sys 0m10.262s 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:00.660 ************************************ 00:21:00.660 END TEST nvmf_fips 00:21:00.660 ************************************ 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:00.660 ************************************ 00:21:00.660 START TEST nvmf_control_msg_list 00:21:00.660 ************************************ 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:00.660 * Looking for test storage... 
00:21:00.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:00.660 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:00.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.661 --rc genhtml_branch_coverage=1 00:21:00.661 --rc genhtml_function_coverage=1 00:21:00.661 --rc genhtml_legend=1 00:21:00.661 --rc geninfo_all_blocks=1 00:21:00.661 --rc geninfo_unexecuted_blocks=1 00:21:00.661 00:21:00.661 ' 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:00.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.661 --rc genhtml_branch_coverage=1 00:21:00.661 --rc genhtml_function_coverage=1 00:21:00.661 --rc genhtml_legend=1 00:21:00.661 --rc geninfo_all_blocks=1 00:21:00.661 --rc geninfo_unexecuted_blocks=1 00:21:00.661 00:21:00.661 ' 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:00.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.661 --rc genhtml_branch_coverage=1 00:21:00.661 --rc genhtml_function_coverage=1 00:21:00.661 --rc genhtml_legend=1 00:21:00.661 --rc geninfo_all_blocks=1 00:21:00.661 --rc geninfo_unexecuted_blocks=1 00:21:00.661 00:21:00.661 ' 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:00.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.661 --rc genhtml_branch_coverage=1 00:21:00.661 --rc genhtml_function_coverage=1 00:21:00.661 --rc genhtml_legend=1 00:21:00.661 --rc geninfo_all_blocks=1 00:21:00.661 --rc geninfo_unexecuted_blocks=1 00:21:00.661 00:21:00.661 ' 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.661 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.922 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:00.922 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:21:00.922 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.922 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.922 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:00.922 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.922 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:00.922 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:00.922 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:00.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:00.923 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:07.492 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:07.492 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:07.492 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:07.492 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:07.492 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:07.492 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:07.492 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:07.492 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:07.493 14:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:07.493 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.493 14:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:07.493 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:07.493 Found net devices under 0000:af:00.0: cvl_0_0 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:07.493 Found net devices under 0000:af:00.1: cvl_0_1 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:07.493 14:23:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:07.493 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:07.493 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:07.493 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:07.493 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:07.493 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:07.493 14:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:07.493 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:07.493 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:07.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:07.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:21:07.493 00:21:07.493 --- 10.0.0.2 ping statistics --- 00:21:07.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.493 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:21:07.493 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:07.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:07.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:21:07.493 00:21:07.493 --- 10.0.0.1 ping statistics --- 00:21:07.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.493 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:21:07.494 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.494 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:07.494 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:07.494 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.494 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:07.494 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:07.494 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.494 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:07.494 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:07.494 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:07.494 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:07.494 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:07.494 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:07.494 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1679729 00:21:07.494 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:07.494 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1679729 00:21:07.494 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1679729 ']' 00:21:07.494 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.494 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:07.494 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.494 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:07.494 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:07.752 [2024-12-10 14:23:08.234749] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:21:07.752 [2024-12-10 14:23:08.234793] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.752 [2024-12-10 14:23:08.317495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.752 [2024-12-10 14:23:08.356330] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.752 [2024-12-10 14:23:08.356363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.752 [2024-12-10 14:23:08.356370] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.752 [2024-12-10 14:23:08.356375] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.752 [2024-12-10 14:23:08.356380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
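Before waitforlisten returns below, the network plumbing this target depends on is worth spelling out. A minimal sketch of the namespace topology that the nvmf_tcp_init steps above construct, assuming the two e810 ports have already been renamed cvl_0_0 and cvl_0_1 as in this run, and run as root:

# Move the target-side port into its own namespace; the initiator port stays put.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address both ends of the 10.0.0.0/24 link and bring them up.
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port on the initiator interface and sanity-check both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# The target itself then runs inside the namespace, as logged above:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF

Keeping the target's port in a separate namespace is what lets a single host exercise a real NIC-to-NIC NVMe/TCP path: the kernel routes initiator traffic out cvl_0_1 and back in through cvl_0_0 instead of short-circuiting over loopback.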
00:21:07.752 [2024-12-10 14:23:08.356911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.752 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:07.752 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:21:07.753 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:07.753 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:07.753 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:07.753 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.753 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:07.753 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:07.753 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:07.753 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.753 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:07.753 [2024-12-10 14:23:08.491222] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:08.011 Malloc0 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.011 14:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:08.011 [2024-12-10 14:23:08.531485] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1679762 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1679764 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1679765 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1679762 00:21:08.011 14:23:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:08.011 [2024-12-10 14:23:08.620135] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:08.011 [2024-12-10 14:23:08.620338] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:08.011 [2024-12-10 14:23:08.620522] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:09.386 Initializing NVMe Controllers 00:21:09.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:09.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:09.386 Initialization complete. Launching workers. 
00:21:09.386 ======================================================== 00:21:09.386 Latency(us) 00:21:09.386 Device Information : IOPS MiB/s Average min max 00:21:09.386 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 5651.00 22.07 176.60 129.61 470.39 00:21:09.386 ======================================================== 00:21:09.386 Total : 5651.00 22.07 176.60 129.61 470.39 00:21:09.386 00:21:09.386 Initializing NVMe Controllers 00:21:09.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:09.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:09.386 Initialization complete. Launching workers. 00:21:09.386 ======================================================== 00:21:09.386 Latency(us) 00:21:09.386 Device Information : IOPS MiB/s Average min max 00:21:09.386 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 5751.00 22.46 173.52 128.93 598.76 00:21:09.386 ======================================================== 00:21:09.386 Total : 5751.00 22.46 173.52 128.93 598.76 00:21:09.386 00:21:09.386 Initializing NVMe Controllers 00:21:09.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:09.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:09.386 Initialization complete. Launching workers. 00:21:09.386 ======================================================== 00:21:09.386 Latency(us) 00:21:09.386 Device Information : IOPS MiB/s Average min max 00:21:09.386 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40952.77 40793.25 41893.50 00:21:09.386 ======================================================== 00:21:09.386 Total : 25.00 0.10 40952.77 40793.25 41893.50 00:21:09.386 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1679764 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1679765 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:09.386 rmmod nvme_tcp 00:21:09.386 rmmod nvme_fabrics 00:21:09.386 rmmod nvme_keyring 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # 
'[' -n 1679729 ']' 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1679729 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1679729 ']' 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1679729 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1679729 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1679729' 00:21:09.386 killing process with pid 1679729 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1679729 00:21:09.386 14:23:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1679729 00:21:09.386 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:09.386 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:09.386 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:09.386 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:09.386 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:09.386 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:09.386 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:09.386 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:09.386 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:09.386 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.386 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.386 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:11.921 00:21:11.921 real 0m10.950s 00:21:11.921 user 0m6.817s 00:21:11.921 sys 0m6.127s 00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:11.921 ************************************ 00:21:11.921 END TEST nvmf_control_msg_list 00:21:11.921 ************************************ 
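Restated outside the xtrace noise, the whole control_msg_list exercise is five RPCs and three competing initiators. A minimal sketch, assuming the namespaced nvmf_tgt above is up and that rpc.py reaches it on the default /var/tmp/spdk.sock (any other socket would need -s):

# Cap the transport at a single control message so the initiators must queue for it.
./scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1

# One subsystem backed by a 32 MB / 512-byte-block malloc bdev, listening on the target IP.
./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Three perf instances on separate cores contend for the one control message; the
# starved one shows up above as the ~25 IOPS / ~41 ms average-latency outlier on core 1.
for mask in 0x2 0x4 0x8; do
    ./build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
done
wait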
00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:11.921 ************************************ 00:21:11.921 START TEST nvmf_wait_for_buf 00:21:11.921 ************************************ 00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:11.921 * Looking for test storage... 00:21:11.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:11.921 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:11.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.922 --rc genhtml_branch_coverage=1 00:21:11.922 --rc genhtml_function_coverage=1 00:21:11.922 --rc genhtml_legend=1 00:21:11.922 --rc geninfo_all_blocks=1 00:21:11.922 --rc geninfo_unexecuted_blocks=1 00:21:11.922 00:21:11.922 ' 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:11.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.922 --rc genhtml_branch_coverage=1 00:21:11.922 --rc genhtml_function_coverage=1 00:21:11.922 --rc genhtml_legend=1 00:21:11.922 --rc geninfo_all_blocks=1 00:21:11.922 --rc geninfo_unexecuted_blocks=1 00:21:11.922 00:21:11.922 ' 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:11.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.922 --rc genhtml_branch_coverage=1 00:21:11.922 --rc genhtml_function_coverage=1 00:21:11.922 --rc genhtml_legend=1 00:21:11.922 --rc geninfo_all_blocks=1 00:21:11.922 --rc geninfo_unexecuted_blocks=1 00:21:11.922 00:21:11.922 ' 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:11.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.922 --rc genhtml_branch_coverage=1 00:21:11.922 --rc genhtml_function_coverage=1 00:21:11.922 --rc genhtml_legend=1 00:21:11.922 --rc geninfo_all_blocks=1 00:21:11.922 --rc geninfo_unexecuted_blocks=1 00:21:11.922 00:21:11.922 ' 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:11.922 14:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:11.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:11.922 14:23:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.490 
14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:18.490 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:18.491 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:18.491 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:18.491 Found net devices under 0000:af:00.0: cvl_0_0 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:18.491 Found net devices under 0000:af:00.1: cvl_0_1 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.491 14:23:18 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:18.491 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.491 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.491 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.491 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:18.491 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:18.491 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:18.491 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:18.491 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:18.491 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:18.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:21:18.491 00:21:18.491 --- 10.0.0.2 ping statistics --- 00:21:18.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.491 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:21:18.491 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:18.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:18.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:21:18.491 00:21:18.491 --- 10.0.0.1 ping statistics --- 00:21:18.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.491 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:21:18.491 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.491 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:18.491 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:18.491 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.491 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:18.491 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:18.491 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.491 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:18.491 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:18.491 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:18.491 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:18.491 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:18.491 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:18.748 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1683977 00:21:18.748 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:18.748 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1683977 00:21:18.748 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1683977 ']' 00:21:18.748 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.748 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.748 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.748 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.748 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:18.748 [2024-12-10 14:23:19.286259] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
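Before the EAL output that follows, the trace has finished the standard phy-mode bring-up: the first E810 port (cvl_0_0) is moved into a private namespace as the target side, its sibling (cvl_0_1) stays in the root namespace as the initiator, an iptables rule tagged SPDK_NVMF opens port 4420, and nvmf_tgt is started inside the namespace with --wait-for-rpc so the iobuf pools can be shrunk before initialization. A sketch of that bring-up using the same names and addresses as this run:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"               # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open NVMe/TCP, tagged so teardown can strip the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Sanity-check both directions, then start the target paused for RPC setup.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &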
00:21:18.748 [2024-12-10 14:23:19.286303] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.748 [2024-12-10 14:23:19.369536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.748 [2024-12-10 14:23:19.407994] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.748 [2024-12-10 14:23:19.408030] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.748 [2024-12-10 14:23:19.408037] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.748 [2024-12-10 14:23:19.408043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.748 [2024-12-10 14:23:19.408048] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:18.748 [2024-12-10 14:23:19.408595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.681 14:23:20 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.681 Malloc0 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.681 [2024-12-10 14:23:20.249075] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.681 [2024-12-10 14:23:20.277262] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.681 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:19.681 [2024-12-10 14:23:20.364286] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:21:21.054 Initializing NVMe Controllers
00:21:21.054 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:21.054 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:21:21.054 Initialization complete. Launching workers.
00:21:21.054 ========================================================
00:21:21.054                                                                 Latency(us)
00:21:21.054 Device Information                                            :       IOPS      MiB/s    Average        min        max
00:21:21.054 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0:     128.55      16.07   32208.23    7282.00   63853.54
00:21:21.054 ========================================================
00:21:21.054 Total                                                         :     128.55      16.07   32208.23    7282.00   63853.54
00:21:21.054
00:21:21.054 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:21.054 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:21.054 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.054 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:21.312 rmmod nvme_tcp 00:21:21.312 rmmod nvme_fabrics 00:21:21.312 rmmod nvme_keyring 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1683977 ']' 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1683977 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1683977 ']' 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1683977 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf --
common/autotest_common.sh@959 -- # uname 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1683977 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1683977' 00:21:21.312 killing process with pid 1683977 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1683977 00:21:21.312 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1683977 00:21:21.571 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:21.571 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:21.571 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:21.571 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:21.571 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:21.571 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:21.571 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:21.571 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:21.571 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:21.571 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.571 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.571 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.476 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:23.476 00:21:23.476 real 0m11.918s 00:21:23.476 user 0m4.969s 00:21:23.476 sys 0m5.572s 00:21:23.476 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.476 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:23.476 ************************************ 00:21:23.476 END TEST nvmf_wait_for_buf 00:21:23.476 ************************************ 00:21:23.476 14:23:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:23.476 14:23:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:23.476 14:23:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:23.476 14:23:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:23.476 14:23:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:23.477 14:23:24 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:30.041 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:30.041 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:30.041 Found net devices under 0000:af:00.0: cvl_0_0 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:30.041 Found net devices under 0000:af:00.1: cvl_0_1 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:30.041 ************************************ 00:21:30.041 START TEST nvmf_perf_adq 00:21:30.041 ************************************ 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:30.041 * Looking for test storage... 00:21:30.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:21:30.041 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:30.300 14:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:30.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.300 --rc genhtml_branch_coverage=1 00:21:30.300 --rc genhtml_function_coverage=1 00:21:30.300 --rc genhtml_legend=1 00:21:30.300 --rc geninfo_all_blocks=1 00:21:30.300 --rc geninfo_unexecuted_blocks=1 00:21:30.300 00:21:30.300 ' 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:30.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.300 --rc genhtml_branch_coverage=1 00:21:30.300 --rc genhtml_function_coverage=1 00:21:30.300 --rc genhtml_legend=1 00:21:30.300 --rc geninfo_all_blocks=1 00:21:30.300 --rc geninfo_unexecuted_blocks=1 00:21:30.300 00:21:30.300 ' 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:30.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.300 --rc genhtml_branch_coverage=1 00:21:30.300 --rc genhtml_function_coverage=1 00:21:30.300 --rc genhtml_legend=1 00:21:30.300 --rc geninfo_all_blocks=1 00:21:30.300 --rc geninfo_unexecuted_blocks=1 00:21:30.300 00:21:30.300 ' 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:30.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.300 --rc genhtml_branch_coverage=1 00:21:30.300 --rc genhtml_function_coverage=1 00:21:30.300 --rc genhtml_legend=1 00:21:30.300 --rc geninfo_all_blocks=1 00:21:30.300 --rc geninfo_unexecuted_blocks=1 00:21:30.300 00:21:30.300 ' 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
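The lt 1.15 2 evaluation just traced (repeated from the wait_for_buf prologue) is scripts/common.sh's segment-wise version compare: both strings are split on ., - and :, the segment pairs are compared numerically left to right, and the first difference decides, so lcov 1.15 sorts below 2 and the old-lcov --rc options get exported. A rough standalone equivalent, ignoring the input sanitizing the real decimal() helper performs:

version_lt() {                          # succeeds when $1 sorts before $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first larger segment: not less
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first smaller segment: less
    done
    return 1                            # equal versions are not less-than
}

version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.x lcov"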
00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:30.300 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:30.301 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:30.301 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:30.301 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:30.301 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:30.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:30.301 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:30.301 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:30.301 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:30.301 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:30.301 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:30.301 14:23:30 
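Two details worth flagging in the trace above. First, the heavily repeated PATH is paths/export.sh unconditionally prepending the go/protoc/golangci directories every time it is sourced, so duplicates accumulate; harmless, just noisy. Second, the "[: : integer expression expected" message comes from nvmf/common.sh line 33 running '[' '' -eq 1 ']': test's -eq requires integer operands and the variable under test is empty. The harness tolerates the non-zero exit, but the failure mode and a guarded form are easy to show (sketch; `flag` is a stand-in for whichever knob was unset here):

    flag=""
    [ "$flag" -eq 1 ]        # -> "[: : integer expression expected", exit status 2
    [ "${flag:-0}" -eq 1 ]   # defaulting the empty value to 0 keeps the test well-formed
    [[ $flag -eq 1 ]]        # bash's [[ ]] evaluates an empty operand as arithmetic 0, no error
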
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:36.863 14:23:37 
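gather_supported_nvmf_pci_devs (traced above, and re-run later for the second test pass) buckets PCI vendor:device IDs into per-NIC-family arrays, then keeps the family requested by SPDK_TEST_NVMF_NICS=e810. A sketch of the pattern, assuming pci_bus_cache is an associative array mapping "vendor:device" to space-separated PCI addresses (its population happens elsewhere in common.sh and is not shown in this excerpt):

    declare -A pci_bus_cache
    pci_bus_cache["0x8086:0x159b"]="0000:af:00.0 0000:af:00.1"   # as discovered on this node
    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=() pci_devs=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})    # unquoted on purpose:
    e810+=(${pci_bus_cache["$intel:0x159b"]})    # one array element per address
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # (one of several ConnectX IDs above)
    pci_devs+=("${e810[@]}")                     # e810 family requested for this run
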
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:36.863 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:36.863 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:36.863 Found net devices under 0000:af:00.0: cvl_0_0 00:21:36.863 14:23:37 
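The "Found net devices under ..." lines are produced by globbing sysfs: every network-capable PCI function lists its kernel interfaces under /sys/bus/pci/devices/<addr>/net/. The two traced steps as a standalone snippet:

    pci=0000:af:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one glob hit per interface
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0 here
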
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.863 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:36.864 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.864 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:36.864 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:36.864 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.864 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:36.864 Found net devices under 0000:af:00.1: cvl_0_1 00:21:36.864 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.864 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:36.864 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:36.864 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:36.864 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:36.864 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:36.864 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:36.864 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:38.239 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:40.775 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.044 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:46.045 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:46.045 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:46.045 Found net devices under 0000:af:00.0: cvl_0_0 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:46.045 Found net devices under 0000:af:00.1: cvl_0_1 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:46.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=1.26 ms 00:21:46.045 00:21:46.045 --- 10.0.0.2 ping statistics --- 00:21:46.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.045 rtt min/avg/max/mdev = 1.258/1.258/1.258/0.000 ms 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:46.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:46.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:21:46.045 00:21:46.045 --- 10.0.0.1 ping statistics --- 00:21:46.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.045 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:46.045 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:46.304 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1693379 00:21:46.304 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1693379 00:21:46.304 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:46.304 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1693379 ']' 00:21:46.304 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.304 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.304 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.304 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.304 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:46.304 [2024-12-10 14:23:46.836946] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
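The topology those two pings just validated was built a few steps back by nvmftestinit: the target port cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace while the initiator port cvl_0_1 (10.0.0.1) stays in the root namespace, so NVMe/TCP traffic between them crosses the physical E810 ports instead of loopback. Condensed from the ip/iptables commands traced above (root required):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
    ping -c 1 10.0.0.2                               # root ns -> namespace, over the wire
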
00:21:46.304 [2024-12-10 14:23:46.836999] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.304 [2024-12-10 14:23:46.920875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:46.304 [2024-12-10 14:23:46.963814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.304 [2024-12-10 14:23:46.963852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.304 [2024-12-10 14:23:46.963860] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.304 [2024-12-10 14:23:46.963865] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.304 [2024-12-10 14:23:46.963870] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:46.304 [2024-12-10 14:23:46.965476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.304 [2024-12-10 14:23:46.965593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.304 [2024-12-10 14:23:46.965719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.304 [2024-12-10 14:23:46.965721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.239 
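With the target parked on --wait-for-rpc, adq_configure_nvmf_target 0 first pins the socket layer: it reads back the default socket implementation (posix here) and sets its options, with --enable-placement-id 0 making this first pass the non-ADQ baseline. rpc_cmd is the test harness's wrapper around scripts/rpc.py, so the equivalent direct invocations would look roughly like (a sketch; rpc.py path relative to the SPDK tree):

    impl=$(scripts/rpc.py sock_get_default_impl | jq -r .impl_name)   # -> posix
    scripts/rpc.py sock_impl_set_options -i "$impl" \
        --enable-placement-id 0 --enable-zerocopy-send-server
    scripts/rpc.py framework_start_init   # releases the --wait-for-rpc pause (traced next)
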
14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.239 [2024-12-10 14:23:47.836261] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.239 Malloc1 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.239 [2024-12-10 14:23:47.900104] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1693619 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:47.239 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:49.766 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:49.766 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.766 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.766 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.766 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:49.766 "tick_rate": 2100000000, 00:21:49.766 "poll_groups": [ 00:21:49.766 { 00:21:49.766 "name": "nvmf_tgt_poll_group_000", 00:21:49.766 "admin_qpairs": 1, 00:21:49.766 "io_qpairs": 1, 00:21:49.766 "current_admin_qpairs": 1, 00:21:49.766 "current_io_qpairs": 1, 00:21:49.766 "pending_bdev_io": 0, 00:21:49.766 "completed_nvme_io": 20609, 00:21:49.766 "transports": [ 00:21:49.766 { 00:21:49.766 "trtype": "TCP" 00:21:49.766 } 00:21:49.766 ] 00:21:49.766 }, 00:21:49.766 { 00:21:49.766 "name": "nvmf_tgt_poll_group_001", 00:21:49.766 "admin_qpairs": 0, 00:21:49.766 "io_qpairs": 1, 00:21:49.766 "current_admin_qpairs": 0, 00:21:49.766 "current_io_qpairs": 1, 00:21:49.766 "pending_bdev_io": 0, 00:21:49.766 "completed_nvme_io": 21047, 00:21:49.766 "transports": [ 00:21:49.766 { 00:21:49.766 "trtype": "TCP" 00:21:49.766 } 00:21:49.766 ] 00:21:49.766 }, 00:21:49.766 { 00:21:49.766 "name": "nvmf_tgt_poll_group_002", 00:21:49.766 "admin_qpairs": 0, 00:21:49.766 "io_qpairs": 1, 00:21:49.766 "current_admin_qpairs": 0, 00:21:49.766 "current_io_qpairs": 1, 00:21:49.766 "pending_bdev_io": 0, 00:21:49.766 "completed_nvme_io": 20695, 00:21:49.766 "transports": [ 00:21:49.766 { 00:21:49.766 "trtype": "TCP" 00:21:49.766 } 00:21:49.767 ] 00:21:49.767 }, 00:21:49.767 { 00:21:49.767 "name": "nvmf_tgt_poll_group_003", 00:21:49.767 "admin_qpairs": 0, 00:21:49.767 "io_qpairs": 1, 00:21:49.767 "current_admin_qpairs": 0, 00:21:49.767 "current_io_qpairs": 1, 00:21:49.767 "pending_bdev_io": 0, 00:21:49.767 "completed_nvme_io": 20484, 00:21:49.767 "transports": [ 00:21:49.767 { 00:21:49.767 "trtype": "TCP" 00:21:49.767 } 00:21:49.767 ] 00:21:49.767 } 00:21:49.767 ] 00:21:49.767 }' 00:21:49.767 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:49.767 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:49.767 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:49.767 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:49.767 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1693619 00:21:58.051 Initializing NVMe Controllers 00:21:58.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:58.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:58.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:58.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:58.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7
00:21:58.051 Initialization complete. Launching workers.
00:21:58.051 ========================================================
00:21:58.051 Latency(us)
00:21:58.051 Device Information : IOPS MiB/s Average min max
00:21:58.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10675.30 41.70 5994.96 2350.26 9995.10
00:21:58.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10966.60 42.84 5835.53 2160.28 10460.96
00:21:58.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10740.40 41.95 5959.86 2230.09 12474.59
00:21:58.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10800.80 42.19 5925.15 2225.93 10357.33
00:21:58.051 ========================================================
00:21:58.051 Total : 43183.10 168.68 5928.28 2160.28 12474.59
00:21:58.051
00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:58.051 rmmod nvme_tcp 00:21:58.051 rmmod nvme_fabrics 00:21:58.051 rmmod nvme_keyring 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1693379 ']' 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1693379 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1693379 ']' 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1693379 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1693379 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1693379' killing process with pid 1693379 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1693379 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1693379 00:21:58.051 14:23:58 
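Two results were just collected: nvmf_get_stats confirmed the connection spread (each of the four poll groups, one per core of -m 0xF, owns exactly one I/O qpair from the four-core perf run with -c 0xF0), and the perf summary above shows roughly 43k aggregate IOPS for 4 KiB randread at queue depth 64. The spread check from the trace, as one pipeline (rpc_cmd again standing in for scripts/rpc.py):

    count=$(rpc_cmd nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l)                                  # lines = poll groups holding exactly 1 qpair
    [[ $count -ne 4 ]] && echo "qpair spread failed: $count of 4 poll groups active"
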
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:58.051 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.955 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:59.955 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:59.955 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:59.955 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:01.340 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:03.873 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:09.147 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:09.147 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:09.147 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.147 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:09.147 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:09.147 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:09.147 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.147 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.147 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.147 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:09.147 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:09.147 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:09.147 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.147 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:09.147 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:09.147 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:09.147 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:09.147 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:09.147 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:09.147 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:09.148 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:09.148 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:09.148 Found net devices under 0000:af:00.0: cvl_0_0 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:09.148 Found net devices under 0000:af:00.1: cvl_0_1 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:09.148 14:24:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:09.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:22:09.148 00:22:09.148 --- 10.0.0.2 ping statistics --- 00:22:09.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.148 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:09.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:09.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:22:09.148 00:22:09.148 --- 10.0.0.1 ping statistics --- 00:22:09.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.148 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:09.148 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:09.149 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:09.149 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:09.149 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:09.149 net.core.busy_poll = 1 00:22:09.149 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:09.149 net.core.busy_read = 1 00:22:09.149 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:09.149 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:09.407 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:09.407 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:09.407 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:09.407 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:09.407 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:09.407 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:09.407 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.407 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1698196 00:22:09.407 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1698196 00:22:09.407 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:09.407 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1698196 ']' 00:22:09.407 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.407 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.407 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.407 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.407 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.407 [2024-12-10 14:24:10.098665] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:22:09.407 [2024-12-10 14:24:10.098722] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.666 [2024-12-10 14:24:10.181728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:09.666 [2024-12-10 14:24:10.221750] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
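
Stepping back, the adq_configure_driver sequence traced above condenses to the following bash sketch. The interface, namespace, address, and queue split are the values from this run, not defaults:

    ns=cvl_0_0_ns_spdk
    ip netns exec "$ns" ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec "$ns" ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1      # poll sockets instead of sleeping on them
    sysctl -w net.core.busy_read=1
    # two hardware traffic classes: TC0 = queues 0-1 (default), TC1 = queues 2-3
    ip netns exec "$ns" tc qdisc add dev cvl_0_0 root mqprio \
        num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec "$ns" tc qdisc add dev cvl_0_0 ingress
    # pin inbound NVMe/TCP (10.0.0.2:4420) to TC1, offloaded in the NIC (skip_sw)
    ip netns exec "$ns" tc filter add dev cvl_0_0 protocol ip parent ffff: \
        prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper invoked afterwards appears to pair each transmit queue with its receive queue (via the per-queue xps_rxqs sysfs knobs) so responses leave on the same steered queues.
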
00:22:09.666 [2024-12-10 14:24:10.221788] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.666 [2024-12-10 14:24:10.221795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.666 [2024-12-10 14:24:10.221801] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.666 [2024-12-10 14:24:10.221807] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.666 [2024-12-10 14:24:10.223182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.666 [2024-12-10 14:24:10.223298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.666 [2024-12-10 14:24:10.223331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.666 [2024-12-10 14:24:10.223332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:10.232 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:10.232 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:10.232 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:10.232 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:10.232 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.491 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.491 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:10.491 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:10.491 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:10.491 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.491 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.491 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.491 14:24:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.491 [2024-12-10 14:24:11.114301] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.491 Malloc1 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.491 [2024-12-10 14:24:11.173737] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1698453 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:10.491 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:13.022 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:13.022 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.022 14:24:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.022 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.022 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:13.022 "tick_rate": 2100000000, 00:22:13.022 "poll_groups": [ 00:22:13.022 { 00:22:13.022 "name": "nvmf_tgt_poll_group_000", 00:22:13.022 "admin_qpairs": 1, 00:22:13.022 "io_qpairs": 4, 00:22:13.022 "current_admin_qpairs": 1, 00:22:13.022 "current_io_qpairs": 4, 00:22:13.023 "pending_bdev_io": 0, 00:22:13.023 "completed_nvme_io": 43891, 00:22:13.023 "transports": [ 00:22:13.023 { 00:22:13.023 "trtype": "TCP" 00:22:13.023 } 00:22:13.023 ] 00:22:13.023 }, 00:22:13.023 { 00:22:13.023 "name": "nvmf_tgt_poll_group_001", 00:22:13.023 "admin_qpairs": 0, 00:22:13.023 "io_qpairs": 0, 00:22:13.023 "current_admin_qpairs": 0, 00:22:13.023 "current_io_qpairs": 0, 00:22:13.023 "pending_bdev_io": 0, 00:22:13.023 "completed_nvme_io": 0, 00:22:13.023 "transports": [ 00:22:13.023 { 00:22:13.023 "trtype": "TCP" 00:22:13.023 } 00:22:13.023 ] 00:22:13.023 }, 00:22:13.023 { 00:22:13.023 "name": "nvmf_tgt_poll_group_002", 00:22:13.023 "admin_qpairs": 0, 00:22:13.023 "io_qpairs": 0, 00:22:13.023 "current_admin_qpairs": 0, 00:22:13.023 "current_io_qpairs": 0, 00:22:13.023 "pending_bdev_io": 0, 00:22:13.023 "completed_nvme_io": 0, 00:22:13.023 "transports": [ 00:22:13.023 { 00:22:13.023 "trtype": "TCP" 00:22:13.023 } 00:22:13.023 ] 00:22:13.023 }, 00:22:13.023 { 00:22:13.023 "name": "nvmf_tgt_poll_group_003", 00:22:13.023 "admin_qpairs": 0, 00:22:13.023 "io_qpairs": 0, 00:22:13.023 "current_admin_qpairs": 0, 00:22:13.023 "current_io_qpairs": 0, 00:22:13.023 "pending_bdev_io": 0, 00:22:13.023 "completed_nvme_io": 0, 00:22:13.023 "transports": [ 00:22:13.023 { 00:22:13.023 "trtype": "TCP" 00:22:13.023 } 00:22:13.023 ] 00:22:13.023 } 00:22:13.023 ] 00:22:13.023 }' 00:22:13.023 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:13.023 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:13.023 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:22:13.023 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:22:13.023 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1698453 00:22:21.132 Initializing NVMe Controllers 00:22:21.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:21.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:21.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:21.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:21.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:21.132 Initialization complete. Launching workers. 
00:22:21.132 ======================================================== 00:22:21.132 Latency(us) 00:22:21.132 Device Information : IOPS MiB/s Average min max 00:22:21.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6108.70 23.86 10480.14 1238.30 54409.51 00:22:21.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6463.10 25.25 9904.72 1188.14 55818.47 00:22:21.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5710.90 22.31 11242.92 1463.74 58740.82 00:22:21.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5097.10 19.91 12581.59 1452.15 56464.85 00:22:21.132 ======================================================== 00:22:21.132 Total : 23379.80 91.33 10965.54 1188.14 58740.82 00:22:21.132 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:21.132 rmmod nvme_tcp 00:22:21.132 rmmod nvme_fabrics 00:22:21.132 rmmod nvme_keyring 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1698196 ']' 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1698196 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1698196 ']' 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1698196 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1698196 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1698196' 00:22:21.132 killing process with pid 1698196 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1698196 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1698196 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:21.132 
14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.132 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:24.423 00:22:24.423 real 0m54.094s 00:22:24.423 user 2m49.940s 00:22:24.423 sys 0m10.880s 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.423 ************************************ 00:22:24.423 END TEST nvmf_perf_adq 00:22:24.423 ************************************ 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:24.423 ************************************ 00:22:24.423 START TEST nvmf_shutdown 00:22:24.423 ************************************ 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:24.423 * Looking for test storage... 
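
Before the next test begins, note that the pass/fail gate for nvmf_perf_adq was the nvmf_get_stats check above. A minimal sketch of that gate, assuming the stats JSON shape shown in the trace (rpc_cmd is the harness RPC wrapper):

    stats=$(rpc_cmd nvmf_get_stats)
    # count poll groups that never saw an I/O qpair; with ADQ working, all four
    # perf connections land on the group that owns the steered hardware queues,
    # leaving 3 of the 4 groups idle (count=3 in this run)
    idle=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
               <<< "$stats" | wc -l)
    if [[ $idle -lt 2 ]]; then
        echo "qpairs spread across poll groups: ADQ steering not effective"
    fi
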
00:22:24.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:24.423 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:24.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.423 --rc genhtml_branch_coverage=1 00:22:24.423 --rc genhtml_function_coverage=1 00:22:24.423 --rc genhtml_legend=1 00:22:24.423 --rc geninfo_all_blocks=1 00:22:24.423 --rc geninfo_unexecuted_blocks=1 00:22:24.423 00:22:24.423 ' 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:24.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.423 --rc genhtml_branch_coverage=1 00:22:24.423 --rc genhtml_function_coverage=1 00:22:24.423 --rc genhtml_legend=1 00:22:24.423 --rc geninfo_all_blocks=1 00:22:24.423 --rc geninfo_unexecuted_blocks=1 00:22:24.423 00:22:24.423 ' 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:24.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.423 --rc genhtml_branch_coverage=1 00:22:24.423 --rc genhtml_function_coverage=1 00:22:24.423 --rc genhtml_legend=1 00:22:24.423 --rc geninfo_all_blocks=1 00:22:24.423 --rc geninfo_unexecuted_blocks=1 00:22:24.423 00:22:24.423 ' 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:24.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.423 --rc genhtml_branch_coverage=1 00:22:24.423 --rc genhtml_function_coverage=1 00:22:24.423 --rc genhtml_legend=1 00:22:24.423 --rc geninfo_all_blocks=1 00:22:24.423 --rc geninfo_unexecuted_blocks=1 00:22:24.423 00:22:24.423 ' 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
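
The lt/cmp_versions plumbing being traced here is a plain component-wise version compare. A minimal sketch of the idea (the real scripts/common.sh splits on ".-:" exactly as below, but additionally validates every field through its decimal helper):

    ver_lt() {
        local IFS='.-:' i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        # compare field by field, treating missing fields as 0
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo "lcov is pre-2.0: use the old --rc option names"
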
00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.423 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:24.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:24.424 14:24:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:24.424 ************************************ 00:22:24.424 START TEST nvmf_shutdown_tc1 00:22:24.424 ************************************ 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:24.424 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:30.993 14:24:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:30.993 14:24:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:30.993 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:30.993 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:30.993 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:30.994 Found net devices under 0000:af:00.0: cvl_0_0 00:22:30.994 14:24:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:30.994 Found net devices under 0000:af:00.1: cvl_0_1 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:30.994 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:31.252 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:31.252 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:31.252 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:31.252 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:31.252 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:31.252 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:31.252 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:31.252 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:31.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:22:31.252 00:22:31.252 --- 10.0.0.2 ping statistics --- 00:22:31.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.252 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:22:31.252 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:31.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:31.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:22:31.252 00:22:31.252 --- 10.0.0.1 ping statistics --- 00:22:31.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.252 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:22:31.252 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.252 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:31.253 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:31.253 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.253 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:31.253 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:31.253 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.253 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:31.253 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:31.253 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:31.253 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:31.253 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:31.253 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.253 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1704146 00:22:31.253 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1704146 00:22:31.253 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:31.253 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1704146 ']' 00:22:31.253 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.253 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.253 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
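
The nvmf_tcp_init just replayed builds the same two-interface topology as in the previous test; condensed (device names and addresses are those of this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # target NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator NIC stays in the default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port, tagging the rule so teardown can strip it again
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The SPDK_NVMF comment is what the iptr teardown seen earlier keys on: iptables-save | grep -v SPDK_NVMF | iptables-restore removes exactly the rules this harness added and nothing else.
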
00:22:31.253 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.253 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.253 [2024-12-10 14:24:31.984814] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:22:31.253 [2024-12-10 14:24:31.984864] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.511 [2024-12-10 14:24:32.067980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:31.511 [2024-12-10 14:24:32.109234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.511 [2024-12-10 14:24:32.109269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.511 [2024-12-10 14:24:32.109276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.511 [2024-12-10 14:24:32.109283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.511 [2024-12-10 14:24:32.109288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:31.511 [2024-12-10 14:24:32.110726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.511 [2024-12-10 14:24:32.110834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:31.511 [2024-12-10 14:24:32.110921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:31.511 [2024-12-10 14:24:32.110923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.511 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:31.511 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:31.511 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:31.511 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:31.511 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.511 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.511 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:31.511 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.511 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.511 [2024-12-10 14:24:32.248197] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:31.769 14:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.769 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.769 Malloc1 
00:22:31.769 [2024-12-10 14:24:32.356656] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.769 Malloc2 00:22:31.769 Malloc3 00:22:31.769 Malloc4 00:22:31.769 Malloc5 00:22:32.027 Malloc6 00:22:32.027 Malloc7 00:22:32.027 Malloc8 00:22:32.027 Malloc9 00:22:32.027 Malloc10 00:22:32.027 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.027 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:32.027 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:32.027 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:32.285 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1704411 00:22:32.285 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1704411 /var/tmp/bdevperf.sock 00:22:32.285 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1704411 ']' 00:22:32.285 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.285 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:32.285 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:32.285 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.285 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:32.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
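
The block that follows is gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 being expanded for bdev_svc via --json /dev/fd/63; the xtrace prints every heredoc substitution, which is why the same stanza repeats ten times. Condensed, the pattern is roughly the sketch below. The per-subsystem stanza is copied from the trace, but the outer "subsystems"/"bdev" wrapper is an assumption about what the --json consumer expects, so treat this as a sketch rather than a verbatim copy of nvmf/common.sh:

# Sketch of the gen_nvmf_target_json pattern traced below: build one
# bdev_nvme_attach_controller stanza per subsystem id, join the stanzas
# with commas, and pretty-print with jq. Outer wrapper object is assumed.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

The fully substituted result is what the later printf '%s\n' '{ "params": { "name": "Nvme1", ... }' entries in the trace show, one attach-controller stanza per cnode1..cnode10.
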
00:22:32.285 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:32.285 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.285 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:32.285 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:32.285 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.285 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.285 { 00:22:32.285 "params": { 00:22:32.285 "name": "Nvme$subsystem", 00:22:32.285 "trtype": "$TEST_TRANSPORT", 00:22:32.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.285 "adrfam": "ipv4", 00:22:32.285 "trsvcid": "$NVMF_PORT", 00:22:32.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.285 "hdgst": ${hdgst:-false}, 00:22:32.285 "ddgst": ${ddgst:-false} 00:22:32.285 }, 00:22:32.285 "method": "bdev_nvme_attach_controller" 00:22:32.285 } 00:22:32.285 EOF 00:22:32.285 )") 00:22:32.285 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:32.285 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.286 { 00:22:32.286 "params": { 00:22:32.286 "name": "Nvme$subsystem", 00:22:32.286 "trtype": "$TEST_TRANSPORT", 00:22:32.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.286 "adrfam": "ipv4", 00:22:32.286 "trsvcid": "$NVMF_PORT", 00:22:32.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.286 "hdgst": ${hdgst:-false}, 00:22:32.286 "ddgst": ${ddgst:-false} 00:22:32.286 }, 00:22:32.286 "method": "bdev_nvme_attach_controller" 00:22:32.286 } 00:22:32.286 EOF 00:22:32.286 )") 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.286 { 00:22:32.286 "params": { 00:22:32.286 "name": "Nvme$subsystem", 00:22:32.286 "trtype": "$TEST_TRANSPORT", 00:22:32.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.286 "adrfam": "ipv4", 00:22:32.286 "trsvcid": "$NVMF_PORT", 00:22:32.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.286 "hdgst": ${hdgst:-false}, 00:22:32.286 "ddgst": ${ddgst:-false} 00:22:32.286 }, 00:22:32.286 "method": "bdev_nvme_attach_controller" 00:22:32.286 } 00:22:32.286 EOF 00:22:32.286 )") 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:22:32.286 { 00:22:32.286 "params": { 00:22:32.286 "name": "Nvme$subsystem", 00:22:32.286 "trtype": "$TEST_TRANSPORT", 00:22:32.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.286 "adrfam": "ipv4", 00:22:32.286 "trsvcid": "$NVMF_PORT", 00:22:32.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.286 "hdgst": ${hdgst:-false}, 00:22:32.286 "ddgst": ${ddgst:-false} 00:22:32.286 }, 00:22:32.286 "method": "bdev_nvme_attach_controller" 00:22:32.286 } 00:22:32.286 EOF 00:22:32.286 )") 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.286 { 00:22:32.286 "params": { 00:22:32.286 "name": "Nvme$subsystem", 00:22:32.286 "trtype": "$TEST_TRANSPORT", 00:22:32.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.286 "adrfam": "ipv4", 00:22:32.286 "trsvcid": "$NVMF_PORT", 00:22:32.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.286 "hdgst": ${hdgst:-false}, 00:22:32.286 "ddgst": ${ddgst:-false} 00:22:32.286 }, 00:22:32.286 "method": "bdev_nvme_attach_controller" 00:22:32.286 } 00:22:32.286 EOF 00:22:32.286 )") 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.286 { 00:22:32.286 "params": { 00:22:32.286 "name": "Nvme$subsystem", 00:22:32.286 "trtype": "$TEST_TRANSPORT", 00:22:32.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.286 "adrfam": "ipv4", 00:22:32.286 "trsvcid": "$NVMF_PORT", 00:22:32.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.286 "hdgst": ${hdgst:-false}, 00:22:32.286 "ddgst": ${ddgst:-false} 00:22:32.286 }, 00:22:32.286 "method": "bdev_nvme_attach_controller" 00:22:32.286 } 00:22:32.286 EOF 00:22:32.286 )") 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:32.286 [2024-12-10 14:24:32.826139] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:22:32.286 [2024-12-10 14:24:32.826190] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.286 { 00:22:32.286 "params": { 00:22:32.286 "name": "Nvme$subsystem", 00:22:32.286 "trtype": "$TEST_TRANSPORT", 00:22:32.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.286 "adrfam": "ipv4", 00:22:32.286 "trsvcid": "$NVMF_PORT", 00:22:32.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.286 "hdgst": ${hdgst:-false}, 00:22:32.286 "ddgst": ${ddgst:-false} 00:22:32.286 }, 00:22:32.286 "method": "bdev_nvme_attach_controller" 00:22:32.286 } 00:22:32.286 EOF 00:22:32.286 )") 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.286 { 00:22:32.286 "params": { 00:22:32.286 "name": "Nvme$subsystem", 00:22:32.286 "trtype": "$TEST_TRANSPORT", 00:22:32.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.286 "adrfam": "ipv4", 00:22:32.286 "trsvcid": "$NVMF_PORT", 00:22:32.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.286 "hdgst": ${hdgst:-false}, 00:22:32.286 "ddgst": ${ddgst:-false} 00:22:32.286 }, 00:22:32.286 "method": "bdev_nvme_attach_controller" 00:22:32.286 } 00:22:32.286 EOF 00:22:32.286 )") 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.286 { 00:22:32.286 "params": { 00:22:32.286 "name": "Nvme$subsystem", 00:22:32.286 "trtype": "$TEST_TRANSPORT", 00:22:32.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.286 "adrfam": "ipv4", 00:22:32.286 "trsvcid": "$NVMF_PORT", 00:22:32.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.286 "hdgst": ${hdgst:-false}, 00:22:32.286 "ddgst": ${ddgst:-false} 00:22:32.286 }, 00:22:32.286 "method": "bdev_nvme_attach_controller" 00:22:32.286 } 00:22:32.286 EOF 00:22:32.286 )") 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.286 { 00:22:32.286 "params": { 00:22:32.286 "name": "Nvme$subsystem", 00:22:32.286 "trtype": "$TEST_TRANSPORT", 00:22:32.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.286 "adrfam": "ipv4", 
00:22:32.286 "trsvcid": "$NVMF_PORT", 00:22:32.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.286 "hdgst": ${hdgst:-false}, 00:22:32.286 "ddgst": ${ddgst:-false} 00:22:32.286 }, 00:22:32.286 "method": "bdev_nvme_attach_controller" 00:22:32.286 } 00:22:32.286 EOF 00:22:32.286 )") 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:32.286 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:32.286 "params": { 00:22:32.286 "name": "Nvme1", 00:22:32.286 "trtype": "tcp", 00:22:32.286 "traddr": "10.0.0.2", 00:22:32.286 "adrfam": "ipv4", 00:22:32.286 "trsvcid": "4420", 00:22:32.286 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.286 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:32.286 "hdgst": false, 00:22:32.286 "ddgst": false 00:22:32.286 }, 00:22:32.286 "method": "bdev_nvme_attach_controller" 00:22:32.286 },{ 00:22:32.286 "params": { 00:22:32.286 "name": "Nvme2", 00:22:32.286 "trtype": "tcp", 00:22:32.286 "traddr": "10.0.0.2", 00:22:32.286 "adrfam": "ipv4", 00:22:32.286 "trsvcid": "4420", 00:22:32.286 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:32.286 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:32.286 "hdgst": false, 00:22:32.286 "ddgst": false 00:22:32.286 }, 00:22:32.286 "method": "bdev_nvme_attach_controller" 00:22:32.286 },{ 00:22:32.286 "params": { 00:22:32.286 "name": "Nvme3", 00:22:32.286 "trtype": "tcp", 00:22:32.286 "traddr": "10.0.0.2", 00:22:32.286 "adrfam": "ipv4", 00:22:32.286 "trsvcid": "4420", 00:22:32.286 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:32.286 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:32.286 "hdgst": false, 00:22:32.286 "ddgst": false 00:22:32.286 }, 00:22:32.286 "method": "bdev_nvme_attach_controller" 00:22:32.286 },{ 00:22:32.286 "params": { 00:22:32.286 "name": "Nvme4", 00:22:32.286 "trtype": "tcp", 00:22:32.286 "traddr": "10.0.0.2", 00:22:32.286 "adrfam": "ipv4", 00:22:32.287 "trsvcid": "4420", 00:22:32.287 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:32.287 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:32.287 "hdgst": false, 00:22:32.287 "ddgst": false 00:22:32.287 }, 00:22:32.287 "method": "bdev_nvme_attach_controller" 00:22:32.287 },{ 00:22:32.287 "params": { 00:22:32.287 "name": "Nvme5", 00:22:32.287 "trtype": "tcp", 00:22:32.287 "traddr": "10.0.0.2", 00:22:32.287 "adrfam": "ipv4", 00:22:32.287 "trsvcid": "4420", 00:22:32.287 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:32.287 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:32.287 "hdgst": false, 00:22:32.287 "ddgst": false 00:22:32.287 }, 00:22:32.287 "method": "bdev_nvme_attach_controller" 00:22:32.287 },{ 00:22:32.287 "params": { 00:22:32.287 "name": "Nvme6", 00:22:32.287 "trtype": "tcp", 00:22:32.287 "traddr": "10.0.0.2", 00:22:32.287 "adrfam": "ipv4", 00:22:32.287 "trsvcid": "4420", 00:22:32.287 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:32.287 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:32.287 "hdgst": false, 00:22:32.287 "ddgst": false 00:22:32.287 }, 00:22:32.287 "method": "bdev_nvme_attach_controller" 00:22:32.287 },{ 00:22:32.287 "params": { 00:22:32.287 "name": "Nvme7", 00:22:32.287 "trtype": "tcp", 00:22:32.287 "traddr": "10.0.0.2", 00:22:32.287 
"adrfam": "ipv4", 00:22:32.287 "trsvcid": "4420", 00:22:32.287 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:32.287 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:32.287 "hdgst": false, 00:22:32.287 "ddgst": false 00:22:32.287 }, 00:22:32.287 "method": "bdev_nvme_attach_controller" 00:22:32.287 },{ 00:22:32.287 "params": { 00:22:32.287 "name": "Nvme8", 00:22:32.287 "trtype": "tcp", 00:22:32.287 "traddr": "10.0.0.2", 00:22:32.287 "adrfam": "ipv4", 00:22:32.287 "trsvcid": "4420", 00:22:32.287 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:32.287 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:32.287 "hdgst": false, 00:22:32.287 "ddgst": false 00:22:32.287 }, 00:22:32.287 "method": "bdev_nvme_attach_controller" 00:22:32.287 },{ 00:22:32.287 "params": { 00:22:32.287 "name": "Nvme9", 00:22:32.287 "trtype": "tcp", 00:22:32.287 "traddr": "10.0.0.2", 00:22:32.287 "adrfam": "ipv4", 00:22:32.287 "trsvcid": "4420", 00:22:32.287 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:32.287 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:32.287 "hdgst": false, 00:22:32.287 "ddgst": false 00:22:32.287 }, 00:22:32.287 "method": "bdev_nvme_attach_controller" 00:22:32.287 },{ 00:22:32.287 "params": { 00:22:32.287 "name": "Nvme10", 00:22:32.287 "trtype": "tcp", 00:22:32.287 "traddr": "10.0.0.2", 00:22:32.287 "adrfam": "ipv4", 00:22:32.287 "trsvcid": "4420", 00:22:32.287 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:32.287 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:32.287 "hdgst": false, 00:22:32.287 "ddgst": false 00:22:32.287 }, 00:22:32.287 "method": "bdev_nvme_attach_controller" 00:22:32.287 }' 00:22:32.287 [2024-12-10 14:24:32.908251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.287 [2024-12-10 14:24:32.947817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.182 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:34.182 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:34.182 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:34.182 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.182 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:34.182 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.182 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1704411 00:22:34.182 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:34.182 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:35.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1704411 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:35.112 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1704146 00:22:35.112 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:35.112 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:35.112 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:35.112 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:35.112 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.112 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.112 { 00:22:35.112 "params": { 00:22:35.112 "name": "Nvme$subsystem", 00:22:35.112 "trtype": "$TEST_TRANSPORT", 00:22:35.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.112 "adrfam": "ipv4", 00:22:35.112 "trsvcid": "$NVMF_PORT", 00:22:35.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.112 "hdgst": ${hdgst:-false}, 00:22:35.112 "ddgst": ${ddgst:-false} 00:22:35.112 }, 00:22:35.112 "method": "bdev_nvme_attach_controller" 00:22:35.112 } 00:22:35.112 EOF 00:22:35.112 )") 00:22:35.112 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.112 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.112 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.112 { 00:22:35.112 "params": { 00:22:35.112 "name": "Nvme$subsystem", 00:22:35.112 "trtype": "$TEST_TRANSPORT", 00:22:35.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.112 "adrfam": "ipv4", 00:22:35.112 "trsvcid": "$NVMF_PORT", 00:22:35.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.112 "hdgst": ${hdgst:-false}, 00:22:35.112 "ddgst": ${ddgst:-false} 00:22:35.112 }, 00:22:35.112 "method": "bdev_nvme_attach_controller" 00:22:35.112 } 00:22:35.112 EOF 00:22:35.112 )") 00:22:35.112 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.112 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.112 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.112 { 00:22:35.112 "params": { 00:22:35.112 "name": "Nvme$subsystem", 00:22:35.112 "trtype": "$TEST_TRANSPORT", 00:22:35.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.112 "adrfam": "ipv4", 00:22:35.112 "trsvcid": "$NVMF_PORT", 00:22:35.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.112 "hdgst": ${hdgst:-false}, 00:22:35.112 "ddgst": ${ddgst:-false} 00:22:35.112 }, 00:22:35.112 "method": "bdev_nvme_attach_controller" 00:22:35.112 } 00:22:35.112 EOF 00:22:35.112 )") 00:22:35.112 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.112 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.112 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.112 { 00:22:35.112 "params": { 00:22:35.112 "name": "Nvme$subsystem", 00:22:35.112 "trtype": "$TEST_TRANSPORT", 00:22:35.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.112 "adrfam": "ipv4", 00:22:35.112 "trsvcid": "$NVMF_PORT", 00:22:35.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.113 "hdgst": ${hdgst:-false}, 00:22:35.113 "ddgst": ${ddgst:-false} 00:22:35.113 }, 00:22:35.113 "method": "bdev_nvme_attach_controller" 00:22:35.113 } 00:22:35.113 EOF 00:22:35.113 )") 00:22:35.113 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.113 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.113 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.113 { 00:22:35.113 "params": { 00:22:35.113 "name": "Nvme$subsystem", 00:22:35.113 "trtype": "$TEST_TRANSPORT", 00:22:35.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.113 "adrfam": "ipv4", 00:22:35.113 "trsvcid": "$NVMF_PORT", 00:22:35.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.113 "hdgst": ${hdgst:-false}, 00:22:35.113 "ddgst": ${ddgst:-false} 00:22:35.113 }, 00:22:35.113 "method": "bdev_nvme_attach_controller" 00:22:35.113 } 00:22:35.113 EOF 00:22:35.113 )") 00:22:35.113 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.113 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.113 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.113 { 00:22:35.113 "params": { 00:22:35.113 "name": "Nvme$subsystem", 00:22:35.113 "trtype": "$TEST_TRANSPORT", 00:22:35.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.113 "adrfam": "ipv4", 00:22:35.113 "trsvcid": "$NVMF_PORT", 00:22:35.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.113 "hdgst": ${hdgst:-false}, 00:22:35.113 "ddgst": ${ddgst:-false} 00:22:35.113 }, 00:22:35.113 "method": "bdev_nvme_attach_controller" 00:22:35.113 } 00:22:35.113 EOF 00:22:35.113 )") 00:22:35.113 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.113 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.113 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.113 { 00:22:35.113 "params": { 00:22:35.113 "name": "Nvme$subsystem", 00:22:35.113 "trtype": "$TEST_TRANSPORT", 00:22:35.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.113 "adrfam": "ipv4", 00:22:35.113 "trsvcid": "$NVMF_PORT", 00:22:35.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.113 "hdgst": ${hdgst:-false}, 00:22:35.113 "ddgst": ${ddgst:-false} 00:22:35.113 }, 00:22:35.113 "method": "bdev_nvme_attach_controller" 00:22:35.113 } 00:22:35.113 EOF 00:22:35.113 )") 00:22:35.113 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.113 [2024-12-10 
14:24:35.759582] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:22:35.113 [2024-12-10 14:24:35.759630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1704887 ] 00:22:35.113 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.113 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.113 { 00:22:35.113 "params": { 00:22:35.113 "name": "Nvme$subsystem", 00:22:35.113 "trtype": "$TEST_TRANSPORT", 00:22:35.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.113 "adrfam": "ipv4", 00:22:35.113 "trsvcid": "$NVMF_PORT", 00:22:35.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.113 "hdgst": ${hdgst:-false}, 00:22:35.113 "ddgst": ${ddgst:-false} 00:22:35.113 }, 00:22:35.113 "method": "bdev_nvme_attach_controller" 00:22:35.113 } 00:22:35.113 EOF 00:22:35.113 )") 00:22:35.113 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.113 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.113 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.113 { 00:22:35.113 "params": { 00:22:35.113 "name": "Nvme$subsystem", 00:22:35.113 "trtype": "$TEST_TRANSPORT", 00:22:35.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.113 "adrfam": "ipv4", 00:22:35.113 "trsvcid": "$NVMF_PORT", 00:22:35.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.113 "hdgst": ${hdgst:-false}, 00:22:35.113 "ddgst": ${ddgst:-false} 00:22:35.113 }, 00:22:35.113 "method": "bdev_nvme_attach_controller" 00:22:35.113 } 00:22:35.113 EOF 00:22:35.113 )") 00:22:35.113 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.113 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.113 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.113 { 00:22:35.113 "params": { 00:22:35.113 "name": "Nvme$subsystem", 00:22:35.113 "trtype": "$TEST_TRANSPORT", 00:22:35.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.113 "adrfam": "ipv4", 00:22:35.113 "trsvcid": "$NVMF_PORT", 00:22:35.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.113 "hdgst": ${hdgst:-false}, 00:22:35.113 "ddgst": ${ddgst:-false} 00:22:35.113 }, 00:22:35.113 "method": "bdev_nvme_attach_controller" 00:22:35.113 } 00:22:35.113 EOF 00:22:35.113 )") 00:22:35.113 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.113 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:22:35.113 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:35.113 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:35.113 "params": { 00:22:35.113 "name": "Nvme1", 00:22:35.113 "trtype": "tcp", 00:22:35.113 "traddr": "10.0.0.2", 00:22:35.113 "adrfam": "ipv4", 00:22:35.113 "trsvcid": "4420", 00:22:35.113 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.113 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:35.113 "hdgst": false, 00:22:35.113 "ddgst": false 00:22:35.113 }, 00:22:35.113 "method": "bdev_nvme_attach_controller" 00:22:35.113 },{ 00:22:35.113 "params": { 00:22:35.113 "name": "Nvme2", 00:22:35.113 "trtype": "tcp", 00:22:35.113 "traddr": "10.0.0.2", 00:22:35.113 "adrfam": "ipv4", 00:22:35.113 "trsvcid": "4420", 00:22:35.113 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:35.113 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:35.113 "hdgst": false, 00:22:35.113 "ddgst": false 00:22:35.113 }, 00:22:35.113 "method": "bdev_nvme_attach_controller" 00:22:35.113 },{ 00:22:35.113 "params": { 00:22:35.113 "name": "Nvme3", 00:22:35.113 "trtype": "tcp", 00:22:35.113 "traddr": "10.0.0.2", 00:22:35.113 "adrfam": "ipv4", 00:22:35.113 "trsvcid": "4420", 00:22:35.113 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:35.113 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:35.113 "hdgst": false, 00:22:35.113 "ddgst": false 00:22:35.113 }, 00:22:35.113 "method": "bdev_nvme_attach_controller" 00:22:35.113 },{ 00:22:35.113 "params": { 00:22:35.113 "name": "Nvme4", 00:22:35.113 "trtype": "tcp", 00:22:35.113 "traddr": "10.0.0.2", 00:22:35.113 "adrfam": "ipv4", 00:22:35.113 "trsvcid": "4420", 00:22:35.113 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:35.113 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:35.113 "hdgst": false, 00:22:35.113 "ddgst": false 00:22:35.113 }, 00:22:35.113 "method": "bdev_nvme_attach_controller" 00:22:35.113 },{ 00:22:35.113 "params": { 00:22:35.113 "name": "Nvme5", 00:22:35.113 "trtype": "tcp", 00:22:35.113 "traddr": "10.0.0.2", 00:22:35.113 "adrfam": "ipv4", 00:22:35.113 "trsvcid": "4420", 00:22:35.113 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:35.113 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:35.113 "hdgst": false, 00:22:35.113 "ddgst": false 00:22:35.113 }, 00:22:35.113 "method": "bdev_nvme_attach_controller" 00:22:35.113 },{ 00:22:35.113 "params": { 00:22:35.113 "name": "Nvme6", 00:22:35.113 "trtype": "tcp", 00:22:35.113 "traddr": "10.0.0.2", 00:22:35.113 "adrfam": "ipv4", 00:22:35.113 "trsvcid": "4420", 00:22:35.113 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:35.113 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:35.113 "hdgst": false, 00:22:35.113 "ddgst": false 00:22:35.113 }, 00:22:35.113 "method": "bdev_nvme_attach_controller" 00:22:35.113 },{ 00:22:35.113 "params": { 00:22:35.113 "name": "Nvme7", 00:22:35.113 "trtype": "tcp", 00:22:35.113 "traddr": "10.0.0.2", 00:22:35.113 "adrfam": "ipv4", 00:22:35.113 "trsvcid": "4420", 00:22:35.113 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:35.113 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:35.113 "hdgst": false, 00:22:35.113 "ddgst": false 00:22:35.113 }, 00:22:35.113 "method": "bdev_nvme_attach_controller" 00:22:35.113 },{ 00:22:35.113 "params": { 00:22:35.113 "name": "Nvme8", 00:22:35.113 "trtype": "tcp", 00:22:35.113 "traddr": "10.0.0.2", 00:22:35.113 "adrfam": "ipv4", 00:22:35.113 "trsvcid": "4420", 00:22:35.113 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:35.113 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:35.113 "hdgst": false, 00:22:35.114 "ddgst": false 00:22:35.114 }, 00:22:35.114 "method": "bdev_nvme_attach_controller" 00:22:35.114 },{ 00:22:35.114 "params": { 00:22:35.114 "name": "Nvme9", 00:22:35.114 "trtype": "tcp", 00:22:35.114 "traddr": "10.0.0.2", 00:22:35.114 "adrfam": "ipv4", 00:22:35.114 "trsvcid": "4420", 00:22:35.114 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:35.114 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:35.114 "hdgst": false, 00:22:35.114 "ddgst": false 00:22:35.114 }, 00:22:35.114 "method": "bdev_nvme_attach_controller" 00:22:35.114 },{ 00:22:35.114 "params": { 00:22:35.114 "name": "Nvme10", 00:22:35.114 "trtype": "tcp", 00:22:35.114 "traddr": "10.0.0.2", 00:22:35.114 "adrfam": "ipv4", 00:22:35.114 "trsvcid": "4420", 00:22:35.114 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:35.114 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:35.114 "hdgst": false, 00:22:35.114 "ddgst": false 00:22:35.114 }, 00:22:35.114 "method": "bdev_nvme_attach_controller" 00:22:35.114 }' 00:22:35.371 [2024-12-10 14:24:35.859996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.371 [2024-12-10 14:24:35.899790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.743 Running I/O for 1 seconds... 00:22:37.697 2248.00 IOPS, 140.50 MiB/s
00:22:37.698 Latency(us)
00:22:37.698 [2024-12-10T13:24:38.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:37.698 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:37.698 Verification LBA range: start 0x0 length 0x400
00:22:37.698 Nvme1n1 : 1.08 301.01 18.81 0.00 0.00 209788.27 8862.96 188743.68
00:22:37.698 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:37.698 Verification LBA range: start 0x0 length 0x400
00:22:37.698 Nvme2n1 : 1.07 238.42 14.90 0.00 0.00 262099.14 19099.06 225693.50
00:22:37.698 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:37.698 Verification LBA range: start 0x0 length 0x400
00:22:37.698 Nvme3n1 : 1.14 285.14 17.82 0.00 0.00 215835.79 2652.65 216705.71
00:22:37.698 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:37.698 Verification LBA range: start 0x0 length 0x400
00:22:37.698 Nvme4n1 : 1.14 290.73 18.17 0.00 0.00 208457.58 3183.18 217704.35
00:22:37.698 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:37.698 Verification LBA range: start 0x0 length 0x400
00:22:37.698 Nvme5n1 : 1.15 278.53 17.41 0.00 0.00 215379.97 14730.00 225693.50
00:22:37.698 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:37.698 Verification LBA range: start 0x0 length 0x400
00:22:37.698 Nvme6n1 : 1.15 279.44 17.47 0.00 0.00 211555.47 17601.10 226692.14
00:22:37.698 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:37.698 Verification LBA range: start 0x0 length 0x400
00:22:37.698 Nvme7n1 : 1.12 284.99 17.81 0.00 0.00 204057.06 16352.79 211712.49
00:22:37.698 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:37.698 Verification LBA range: start 0x0 length 0x400
00:22:37.698 Nvme8n1 : 1.14 280.34 17.52 0.00 0.00 204697.40 16852.11 214708.42
00:22:37.698 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:37.698 Verification LBA range: start 0x0 length 0x400
00:22:37.698 Nvme9n1 : 1.15 277.61 17.35 0.00 0.00 203857.53 18225.25 223696.21
00:22:37.698 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:37.698 Verification LBA range: start 0x0 length 0x400
00:22:37.698 Nvme10n1 : 1.16 277.02 17.31 0.00 0.00 201294.21 14667.58 245666.38
00:22:37.698 [2024-12-10T13:24:38.438Z] ===================================================================================================================
00:22:37.698 [2024-12-10T13:24:38.438Z] Total : 2793.23 174.58 0.00 0.00 212700.38 2652.65 245666.38
00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:37.969 rmmod nvme_tcp 00:22:37.969 rmmod nvme_fabrics 00:22:37.969 rmmod nvme_keyring 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1704146 ']' 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1704146 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1704146 ']' 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1704146 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1704146 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:37.969 14:24:38
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1704146' 00:22:37.969 killing process with pid 1704146 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1704146 00:22:37.969 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1704146 00:22:38.536 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:38.536 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:38.536 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:38.536 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:38.536 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:38.536 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:38.536 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:38.536 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:38.536 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:38.536 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.536 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.536 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:40.442 00:22:40.442 real 0m15.968s 00:22:40.442 user 0m33.352s 00:22:40.442 sys 0m6.518s 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:40.442 ************************************ 00:22:40.442 END TEST nvmf_shutdown_tc1 00:22:40.442 ************************************ 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:40.442 ************************************ 00:22:40.442 START TEST nvmf_shutdown_tc2 00:22:40.442 ************************************ 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # 
nvmf_shutdown_tc2 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.442 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.443 14:24:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:40.443 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.443 14:24:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:40.443 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:40.443 Found net devices under 0000:af:00.0: cvl_0_0 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.443 14:24:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:40.443 Found net devices under 0000:af:00.1: cvl_0_1 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.443 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:40.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:22:40.703 00:22:40.703 --- 10.0.0.2 ping statistics --- 00:22:40.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.703 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:40.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:22:40.703 00:22:40.703 --- 10.0.0.1 ping statistics --- 00:22:40.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.703 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1705915 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1705915 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1705915 ']' 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.703 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.962 [2024-12-10 14:24:41.488598] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:22:40.962 [2024-12-10 14:24:41.488652] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.962 [2024-12-10 14:24:41.575845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:40.962 [2024-12-10 14:24:41.614550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.962 [2024-12-10 14:24:41.614588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.962 [2024-12-10 14:24:41.614595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.962 [2024-12-10 14:24:41.614601] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.962 [2024-12-10 14:24:41.614605] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
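
By this point the harness has moved one E810 port (cvl_0_0, 10.0.0.2) into the cvl_0_0_ns_spdk namespace, left its sibling (cvl_0_1, 10.0.0.1) in the root namespace as the initiator side, opened TCP/4420 in iptables, verified both directions with ping, and has just launched nvmf_tgt inside that namespace while waitforlisten blocks on /var/tmp/spdk.sock. A condensed sketch of the setup, with a socket-based wait (the real waitforlisten in autotest_common.sh also retries the RPC itself; this version only waits for the UNIX socket to appear):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, test ns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died before listening" >&2; exit 1; }
        sleep 0.1
    done
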
00:22:40.962 [2024-12-10 14:24:41.616017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.962 [2024-12-10 14:24:41.616108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.962 [2024-12-10 14:24:41.616193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.962 [2024-12-10 14:24:41.616194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:41.899 [2024-12-10 14:24:42.371041] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.899 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:41.899 Malloc1 00:22:41.899 [2024-12-10 14:24:42.477111] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.899 Malloc2 00:22:41.899 Malloc3 00:22:41.899 Malloc4 00:22:41.899 Malloc5 00:22:42.158 Malloc6 00:22:42.158 Malloc7 00:22:42.158 Malloc8 00:22:42.158 Malloc9 00:22:42.158 Malloc10 00:22:42.158 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.158 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:42.158 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:42.158 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1706191 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1706191 /var/tmp/bdevperf.sock 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1706191 ']' 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:42.418 14:24:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:42.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.418 { 00:22:42.418 "params": { 00:22:42.418 "name": "Nvme$subsystem", 00:22:42.418 "trtype": "$TEST_TRANSPORT", 00:22:42.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.418 "adrfam": "ipv4", 00:22:42.418 "trsvcid": "$NVMF_PORT", 00:22:42.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.418 "hdgst": ${hdgst:-false}, 00:22:42.418 "ddgst": ${ddgst:-false} 00:22:42.418 }, 00:22:42.418 "method": "bdev_nvme_attach_controller" 00:22:42.418 } 00:22:42.418 EOF 00:22:42.418 )") 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.418 { 00:22:42.418 "params": { 00:22:42.418 "name": "Nvme$subsystem", 00:22:42.418 "trtype": "$TEST_TRANSPORT", 00:22:42.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.418 "adrfam": "ipv4", 00:22:42.418 "trsvcid": "$NVMF_PORT", 00:22:42.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.418 "hdgst": ${hdgst:-false}, 00:22:42.418 "ddgst": ${ddgst:-false} 00:22:42.418 }, 00:22:42.418 "method": "bdev_nvme_attach_controller" 00:22:42.418 } 00:22:42.418 EOF 00:22:42.418 )") 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.418 { 00:22:42.418 "params": { 00:22:42.418 
"name": "Nvme$subsystem", 00:22:42.418 "trtype": "$TEST_TRANSPORT", 00:22:42.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.418 "adrfam": "ipv4", 00:22:42.418 "trsvcid": "$NVMF_PORT", 00:22:42.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.418 "hdgst": ${hdgst:-false}, 00:22:42.418 "ddgst": ${ddgst:-false} 00:22:42.418 }, 00:22:42.418 "method": "bdev_nvme_attach_controller" 00:22:42.418 } 00:22:42.418 EOF 00:22:42.418 )") 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.418 { 00:22:42.418 "params": { 00:22:42.418 "name": "Nvme$subsystem", 00:22:42.418 "trtype": "$TEST_TRANSPORT", 00:22:42.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.418 "adrfam": "ipv4", 00:22:42.418 "trsvcid": "$NVMF_PORT", 00:22:42.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.418 "hdgst": ${hdgst:-false}, 00:22:42.418 "ddgst": ${ddgst:-false} 00:22:42.418 }, 00:22:42.418 "method": "bdev_nvme_attach_controller" 00:22:42.418 } 00:22:42.418 EOF 00:22:42.418 )") 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.418 { 00:22:42.418 "params": { 00:22:42.418 "name": "Nvme$subsystem", 00:22:42.418 "trtype": "$TEST_TRANSPORT", 00:22:42.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.418 "adrfam": "ipv4", 00:22:42.418 "trsvcid": "$NVMF_PORT", 00:22:42.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.418 "hdgst": ${hdgst:-false}, 00:22:42.418 "ddgst": ${ddgst:-false} 00:22:42.418 }, 00:22:42.418 "method": "bdev_nvme_attach_controller" 00:22:42.418 } 00:22:42.418 EOF 00:22:42.418 )") 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.418 { 00:22:42.418 "params": { 00:22:42.418 "name": "Nvme$subsystem", 00:22:42.418 "trtype": "$TEST_TRANSPORT", 00:22:42.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.418 "adrfam": "ipv4", 00:22:42.418 "trsvcid": "$NVMF_PORT", 00:22:42.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.418 "hdgst": ${hdgst:-false}, 00:22:42.418 "ddgst": ${ddgst:-false} 00:22:42.418 }, 00:22:42.418 "method": "bdev_nvme_attach_controller" 00:22:42.418 } 00:22:42.418 EOF 00:22:42.418 )") 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.418 { 00:22:42.418 "params": { 00:22:42.418 "name": "Nvme$subsystem", 00:22:42.418 "trtype": "$TEST_TRANSPORT", 00:22:42.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.418 "adrfam": "ipv4", 00:22:42.418 "trsvcid": "$NVMF_PORT", 00:22:42.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.418 "hdgst": ${hdgst:-false}, 00:22:42.418 "ddgst": ${ddgst:-false} 00:22:42.418 }, 00:22:42.418 "method": "bdev_nvme_attach_controller" 00:22:42.418 } 00:22:42.418 EOF 00:22:42.418 )") 00:22:42.418 [2024-12-10 14:24:42.949448] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:22:42.418 [2024-12-10 14:24:42.949496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706191 ] 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.418 { 00:22:42.418 "params": { 00:22:42.418 "name": "Nvme$subsystem", 00:22:42.418 "trtype": "$TEST_TRANSPORT", 00:22:42.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.418 "adrfam": "ipv4", 00:22:42.418 "trsvcid": "$NVMF_PORT", 00:22:42.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.418 "hdgst": ${hdgst:-false}, 00:22:42.418 "ddgst": ${ddgst:-false} 00:22:42.418 }, 00:22:42.418 "method": "bdev_nvme_attach_controller" 00:22:42.418 } 00:22:42.418 EOF 00:22:42.418 )") 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.418 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.418 { 00:22:42.418 "params": { 00:22:42.418 "name": "Nvme$subsystem", 00:22:42.418 "trtype": "$TEST_TRANSPORT", 00:22:42.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.418 "adrfam": "ipv4", 00:22:42.418 "trsvcid": "$NVMF_PORT", 00:22:42.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.419 "hdgst": ${hdgst:-false}, 00:22:42.419 "ddgst": ${ddgst:-false} 00:22:42.419 }, 00:22:42.419 "method": "bdev_nvme_attach_controller" 00:22:42.419 } 00:22:42.419 EOF 00:22:42.419 )") 00:22:42.419 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:42.419 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.419 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.419 { 00:22:42.419 "params": { 00:22:42.419 "name": "Nvme$subsystem", 00:22:42.419 "trtype": "$TEST_TRANSPORT", 00:22:42.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.419 
"adrfam": "ipv4", 00:22:42.419 "trsvcid": "$NVMF_PORT", 00:22:42.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.419 "hdgst": ${hdgst:-false}, 00:22:42.419 "ddgst": ${ddgst:-false} 00:22:42.419 }, 00:22:42.419 "method": "bdev_nvme_attach_controller" 00:22:42.419 } 00:22:42.419 EOF 00:22:42.419 )") 00:22:42.419 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:42.419 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:22:42.419 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:42.419 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:42.419 "params": { 00:22:42.419 "name": "Nvme1", 00:22:42.419 "trtype": "tcp", 00:22:42.419 "traddr": "10.0.0.2", 00:22:42.419 "adrfam": "ipv4", 00:22:42.419 "trsvcid": "4420", 00:22:42.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.419 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:42.419 "hdgst": false, 00:22:42.419 "ddgst": false 00:22:42.419 }, 00:22:42.419 "method": "bdev_nvme_attach_controller" 00:22:42.419 },{ 00:22:42.419 "params": { 00:22:42.419 "name": "Nvme2", 00:22:42.419 "trtype": "tcp", 00:22:42.419 "traddr": "10.0.0.2", 00:22:42.419 "adrfam": "ipv4", 00:22:42.419 "trsvcid": "4420", 00:22:42.419 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:42.419 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:42.419 "hdgst": false, 00:22:42.419 "ddgst": false 00:22:42.419 }, 00:22:42.419 "method": "bdev_nvme_attach_controller" 00:22:42.419 },{ 00:22:42.419 "params": { 00:22:42.419 "name": "Nvme3", 00:22:42.419 "trtype": "tcp", 00:22:42.419 "traddr": "10.0.0.2", 00:22:42.419 "adrfam": "ipv4", 00:22:42.419 "trsvcid": "4420", 00:22:42.419 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:42.419 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:42.419 "hdgst": false, 00:22:42.419 "ddgst": false 00:22:42.419 }, 00:22:42.419 "method": "bdev_nvme_attach_controller" 00:22:42.419 },{ 00:22:42.419 "params": { 00:22:42.419 "name": "Nvme4", 00:22:42.419 "trtype": "tcp", 00:22:42.419 "traddr": "10.0.0.2", 00:22:42.419 "adrfam": "ipv4", 00:22:42.419 "trsvcid": "4420", 00:22:42.419 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:42.419 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:42.419 "hdgst": false, 00:22:42.419 "ddgst": false 00:22:42.419 }, 00:22:42.419 "method": "bdev_nvme_attach_controller" 00:22:42.419 },{ 00:22:42.419 "params": { 00:22:42.419 "name": "Nvme5", 00:22:42.419 "trtype": "tcp", 00:22:42.419 "traddr": "10.0.0.2", 00:22:42.419 "adrfam": "ipv4", 00:22:42.419 "trsvcid": "4420", 00:22:42.419 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:42.419 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:42.419 "hdgst": false, 00:22:42.419 "ddgst": false 00:22:42.419 }, 00:22:42.419 "method": "bdev_nvme_attach_controller" 00:22:42.419 },{ 00:22:42.419 "params": { 00:22:42.419 "name": "Nvme6", 00:22:42.419 "trtype": "tcp", 00:22:42.419 "traddr": "10.0.0.2", 00:22:42.419 "adrfam": "ipv4", 00:22:42.419 "trsvcid": "4420", 00:22:42.419 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:42.419 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:42.419 "hdgst": false, 00:22:42.419 "ddgst": false 00:22:42.419 }, 00:22:42.419 "method": "bdev_nvme_attach_controller" 00:22:42.419 },{ 00:22:42.419 "params": { 00:22:42.419 "name": "Nvme7", 00:22:42.419 "trtype": "tcp", 00:22:42.419 "traddr": "10.0.0.2", 
00:22:42.419 "adrfam": "ipv4", 00:22:42.419 "trsvcid": "4420", 00:22:42.419 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:42.419 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:42.419 "hdgst": false, 00:22:42.419 "ddgst": false 00:22:42.419 }, 00:22:42.419 "method": "bdev_nvme_attach_controller" 00:22:42.419 },{ 00:22:42.419 "params": { 00:22:42.419 "name": "Nvme8", 00:22:42.419 "trtype": "tcp", 00:22:42.419 "traddr": "10.0.0.2", 00:22:42.419 "adrfam": "ipv4", 00:22:42.419 "trsvcid": "4420", 00:22:42.419 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:42.419 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:42.419 "hdgst": false, 00:22:42.419 "ddgst": false 00:22:42.419 }, 00:22:42.419 "method": "bdev_nvme_attach_controller" 00:22:42.419 },{ 00:22:42.419 "params": { 00:22:42.419 "name": "Nvme9", 00:22:42.419 "trtype": "tcp", 00:22:42.419 "traddr": "10.0.0.2", 00:22:42.419 "adrfam": "ipv4", 00:22:42.419 "trsvcid": "4420", 00:22:42.419 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:42.419 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:42.419 "hdgst": false, 00:22:42.419 "ddgst": false 00:22:42.419 }, 00:22:42.419 "method": "bdev_nvme_attach_controller" 00:22:42.419 },{ 00:22:42.419 "params": { 00:22:42.419 "name": "Nvme10", 00:22:42.419 "trtype": "tcp", 00:22:42.419 "traddr": "10.0.0.2", 00:22:42.419 "adrfam": "ipv4", 00:22:42.419 "trsvcid": "4420", 00:22:42.419 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:42.419 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:42.419 "hdgst": false, 00:22:42.419 "ddgst": false 00:22:42.419 }, 00:22:42.419 "method": "bdev_nvme_attach_controller" 00:22:42.419 }' 00:22:42.419 [2024-12-10 14:24:43.032576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.419 [2024-12-10 14:24:43.072370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.796 Running I/O for 10 seconds... 
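
gen_nvmf_target_json expanded the heredoc template once per NQN and printed the ten bdev_nvme_attach_controller blocks shown above, which reach bdevperf as --json /dev/fd/63 through process substitution. A trimmed single-controller equivalent written to a regular file (the outer "subsystems"/"config" wrapper is part of common.sh's output but falls outside this excerpt, and the file path is illustrative):

    printf '%s\n' '{
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "hdgst": false, "ddgst": false }
        } ]
      } ]
    }' > /tmp/bdevperf.json
    ./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 10
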
00:22:43.796 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:43.796 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:43.796 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:43.796 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.796 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:44.055 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.055 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:44.055 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:44.055 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:44.055 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:44.055 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:44.055 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:44.055 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:44.055 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:44.055 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:44.055 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.055 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:44.055 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.055 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:44.055 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:44.055 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:44.314 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:44.314 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:44.314 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:44.314 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:44.314 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.314 14:24:44 
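
waitforio (shutdown.sh@58..@70) polls Nvme1n1's read counter through bdevperf's RPC socket until at least 100 reads have completed, giving up after 10 tries; the first sample above is only 3, so it sleeps 0.25 s and retries. The loop extracted as a function, with rpc_cmd spelled out as a direct rpc.py call:

    waitforio() {
        local i ret=1 read_io_count
        for ((i = 10; i > 0; i--)); do
            read_io_count=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 |
                jq -r '.bdevs[0].num_read_ops')
            [ "$read_io_count" -ge 100 ] && { ret=0; break; }   # enough in-flight traffic observed
            sleep 0.25
        done
        return $ret
    }
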
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:44.314 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.314 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=88 00:22:44.314 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 88 -ge 100 ']' 00:22:44.314 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:44.578 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:44.578 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:44.578 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:44.578 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:44.578 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.578 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:44.578 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.578 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:22:44.578 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:22:44.578 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:44.578 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:44.578 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:44.578 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1706191 00:22:44.578 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1706191 ']' 00:22:44.578 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1706191 00:22:44.578 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:44.578 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.578 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1706191 00:22:44.838 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:44.838 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:44.838 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1706191' 00:22:44.838 killing process with pid 1706191 00:22:44.838 14:24:45 
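
With 195 reads counted the threshold passes and killprocess takes over: it confirms the PID is alive with kill -0, resolves the process name with ps so it never signals a sudo wrapper by mistake, then kills and waits. A sketch of that guard (autotest_common.sh's real version also handles non-Linux hosts and sudo-owned processes rather than just refusing them):

    killprocess() {
        local pid=$1 name
        kill -0 "$pid" || return 0                      # already gone, nothing to do
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1                  # refuse to signal the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }
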
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1706191
00:22:44.838 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1706191
00:22:44.838 Received shutdown signal, test time was about 0.912988 seconds
00:22:44.838
00:22:44.838 Latency(us)
00:22:44.838 [2024-12-10T13:24:45.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:44.838 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:44.838 Verification LBA range: start 0x0 length 0x400
00:22:44.838 Nvme1n1 : 0.89 291.74 18.23 0.00 0.00 215791.30 4587.52 213709.78
00:22:44.838 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:44.838 Verification LBA range: start 0x0 length 0x400
00:22:44.838 Nvme2n1 : 0.90 283.68 17.73 0.00 0.00 219313.25 18724.57 211712.49
00:22:44.838 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:44.838 Verification LBA range: start 0x0 length 0x400
00:22:44.838 Nvme3n1 : 0.88 290.37 18.15 0.00 0.00 210257.68 12483.05 211712.49
00:22:44.838 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:44.838 Verification LBA range: start 0x0 length 0x400
00:22:44.838 Nvme4n1 : 0.88 297.95 18.62 0.00 0.00 200325.37 3698.10 184749.10
00:22:44.838 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:44.838 Verification LBA range: start 0x0 length 0x400
00:22:44.838 Nvme5n1 : 0.91 282.40 17.65 0.00 0.00 208761.42 15478.98 217704.35
00:22:44.838 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:44.838 Verification LBA range: start 0x0 length 0x400
00:22:44.838 Nvme6n1 : 0.91 281.65 17.60 0.00 0.00 205443.66 18474.91 214708.42
00:22:44.838 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:44.838 Verification LBA range: start 0x0 length 0x400
00:22:44.838 Nvme7n1 : 0.89 286.82 17.93 0.00 0.00 197626.03 14417.92 216705.71
00:22:44.838 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:44.838 Verification LBA range: start 0x0 length 0x400
00:22:44.838 Nvme8n1 : 0.90 284.66 17.79 0.00 0.00 195391.76 13294.45 212711.13
00:22:44.838 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:44.838 Verification LBA range: start 0x0 length 0x400
00:22:44.838 Nvme9n1 : 0.87 219.70 13.73 0.00 0.00 247019.68 21845.33 223696.21
00:22:44.838 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:44.838 Verification LBA range: start 0x0 length 0x400
00:22:44.838 Nvme10n1 : 0.91 280.60 17.54 0.00 0.00 190939.18 17476.27 240673.16
00:22:44.838 [2024-12-10T13:24:45.578Z] ===================================================================================================================
00:22:44.838 [2024-12-10T13:24:45.578Z] Total : 2799.55 174.97 0.00 0.00 208110.97 3698.10 240673.16
00:22:45.097 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1705915
00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:46.033 14:24:46
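
A quick consistency check on the summary table: bdevperf's MiB/s column is just IOPS times the 64 KiB I/O size, e.g. for the Total row 2799.55 IO/s x 65536 B = 183,471,309 B/s ~= 174.97 MiB/s, matching the column exactly. Per-device throughput is near-uniform (roughly 17.5 to 18.6 MiB/s) except Nvme9n1, whose higher average latency (247019.68 us) drags it down to 13.73 MiB/s.
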
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:46.033 rmmod nvme_tcp 00:22:46.033 rmmod nvme_fabrics 00:22:46.033 rmmod nvme_keyring 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1705915 ']' 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1705915 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1705915 ']' 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1705915 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1705915 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1705915' 00:22:46.033 killing process with pid 1705915 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1705915 00:22:46.033 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1705915 00:22:46.602 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:46.602 14:24:47 
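
nvmftestfini tears down in reverse: sync, unload nvme-tcp (whose removal cascades through nvme_fabrics and nvme_keyring, hence the rmmod lines above) inside a retry loop, then kill the target process itself. The unload idiom, roughly as common.sh structures it between @124 and @129 (the retry count matches the visible "for i in {1..20}"; the break/sleep placement is an assumption since the loop body is only partly traced):

    set +e                      # modprobe -r can fail while connection refcounts drain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
    modprobe -v -r nvme-keyring 2>/dev/null   # usually already gone via the cascade
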
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:46.602 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:46.602 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:46.602 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:46.602 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:46.602 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:46.602 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:46.602 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:46.602 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.602 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.602 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:48.508 00:22:48.508 real 0m8.025s 00:22:48.508 user 0m24.487s 00:22:48.508 sys 0m1.420s 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:48.508 ************************************ 00:22:48.508 END TEST nvmf_shutdown_tc2 00:22:48.508 ************************************ 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:48.508 ************************************ 00:22:48.508 START TEST nvmf_shutdown_tc3 00:22:48.508 ************************************ 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:48.508 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.509 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.768 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:48.768 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:48.768 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:48.768 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:48.768 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:48.768 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:48.768 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.768 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:48.768 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:48.768 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.768 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.768 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.768 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.768 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.768 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.768 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:48.768 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:48.768 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.769 14:24:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:48.769 Found net devices under 0000:af:00.0: cvl_0_0 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:48.769 Found net devices under 0000:af:00.1: cvl_0_1 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.769 14:24:49 
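The discovery pass traced above is nvmf/common.sh's gather_supported_nvmf_pci_devs: it matches known Intel (0x8086) and Mellanox (0x15b3) NIC device IDs against the PCI bus cache, keeps only the e810 list here ([[ e810 == e810 ]] above), and resolves each PCI function to its kernel net device through /sys/bus/pci/devices/<addr>/net/. A minimal standalone sketch of the same resolution, assuming pciutils is installed — the helper below is ours, not SPDK's, which works from a prebuilt pci_bus_cache:

#!/usr/bin/env bash
# Locate E810 functions (device ID 0x159b) and the net devices bound to
# them, as the trace does for 0000:af:00.0 and 0000:af:00.1.
intel=8086 e810=159b
for pci in $(lspci -Dn -d "$intel:$e810" | awk '{print $1}'); do
  for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
    [[ -e $netdir ]] || continue  # function has no bound net driver
    echo "Found net devices under $pci: ${netdir##*/}"
  done
done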
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.769 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.028 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.028 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:49.028 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:49.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:22:49.028 00:22:49.028 --- 10.0.0.2 ping statistics --- 00:22:49.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.028 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:22:49.029 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:49.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:22:49.029 00:22:49.029 --- 10.0.0.1 ping statistics --- 00:22:49.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.029 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:22:49.029 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.029 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:49.029 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:49.029 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.029 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:49.029 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:49.029 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.029 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:49.029 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:49.029 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:49.029 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:49.029 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:49.029 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:49.029 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1707449 00:22:49.029 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1707449 00:22:49.029 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:49.029 14:24:49 
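At this point nvmf_tcp_init has turned the two E810 ports into a back-to-back target/initiator pair: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, the ipts wrapper inserts an iptables ACCEPT for port 4420 (tagged with an SPDK_NVMF comment so cleanup can find the rule later), and one ping in each direction proves the path before nvmf_tgt is launched inside the namespace. Condensed from the trace above (interface and namespace names are the ones this run chose):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target side
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                    # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> initiator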
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1707449 ']' 00:22:49.029 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.029 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.029 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.029 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.029 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:49.029 [2024-12-10 14:24:49.635615] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:22:49.029 [2024-12-10 14:24:49.635659] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.029 [2024-12-10 14:24:49.715564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:49.029 [2024-12-10 14:24:49.766055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.029 [2024-12-10 14:24:49.766101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.029 [2024-12-10 14:24:49.766113] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.029 [2024-12-10 14:24:49.766122] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.029 [2024-12-10 14:24:49.766130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
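nvmfappstart has launched nvmf_tgt inside the namespace (pid 1707449) and now blocks in waitforlisten until the target's RPC socket is usable. The real helper lives in common/autotest_common.sh; what follows is a simplified sketch of the polling pattern, reusing the max_retries=100 budget visible above (the function name and the 0.1 s interval are ours):

# Succeed once the process is alive and its UNIX-domain RPC socket
# exists; fail early if the target dies during startup.
waitforlisten_sketch() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for ((i = 100; i > 0; i--)); do
    kill -0 "$pid" 2> /dev/null || return 1  # target died during startup
    [[ -S $rpc_addr ]] && return 0           # socket is up
    sleep 0.1
  done
  return 1
}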
00:22:49.288 [2024-12-10 14:24:49.768255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.288 [2024-12-10 14:24:49.768364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.288 [2024-12-10 14:24:49.768472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:49.288 [2024-12-10 14:24:49.768473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:49.855 [2024-12-10 14:24:50.527175] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.855 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.114 Malloc1 00:22:50.114 [2024-12-10 14:24:50.639891] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.114 Malloc2 00:22:50.114 Malloc3 00:22:50.114 Malloc4 00:22:50.114 Malloc5 00:22:50.114 Malloc6 00:22:50.373 Malloc7 00:22:50.373 Malloc8 00:22:50.373 Malloc9 00:22:50.373 Malloc10 00:22:50.373 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.373 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:50.373 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:50.373 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.373 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1707723 00:22:50.373 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1707723 /var/tmp/bdevperf.sock 00:22:50.373 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1707723 ']' 00:22:50.373 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.373 14:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:50.373 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.373 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:50.373 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:50.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:50.373 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:50.373 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.373 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:50.373 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.373 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.373 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.373 { 00:22:50.373 "params": { 00:22:50.373 "name": "Nvme$subsystem", 00:22:50.373 "trtype": "$TEST_TRANSPORT", 00:22:50.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.373 "adrfam": "ipv4", 00:22:50.373 "trsvcid": "$NVMF_PORT", 00:22:50.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.373 "hdgst": ${hdgst:-false}, 00:22:50.373 "ddgst": ${ddgst:-false} 00:22:50.373 }, 00:22:50.373 "method": "bdev_nvme_attach_controller" 00:22:50.373 } 00:22:50.373 EOF 00:22:50.373 )") 00:22:50.373 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:50.373 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.373 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.373 { 00:22:50.373 "params": { 00:22:50.373 "name": "Nvme$subsystem", 00:22:50.374 "trtype": "$TEST_TRANSPORT", 00:22:50.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.374 "adrfam": "ipv4", 00:22:50.374 "trsvcid": "$NVMF_PORT", 00:22:50.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.374 "hdgst": ${hdgst:-false}, 00:22:50.374 "ddgst": ${ddgst:-false} 00:22:50.374 }, 00:22:50.374 "method": "bdev_nvme_attach_controller" 00:22:50.374 } 00:22:50.374 EOF 00:22:50.374 )") 00:22:50.374 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:50.374 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.374 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.374 { 00:22:50.374 "params": { 00:22:50.374 
"name": "Nvme$subsystem", 00:22:50.374 "trtype": "$TEST_TRANSPORT", 00:22:50.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.374 "adrfam": "ipv4", 00:22:50.374 "trsvcid": "$NVMF_PORT", 00:22:50.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.374 "hdgst": ${hdgst:-false}, 00:22:50.374 "ddgst": ${ddgst:-false} 00:22:50.374 }, 00:22:50.374 "method": "bdev_nvme_attach_controller" 00:22:50.374 } 00:22:50.374 EOF 00:22:50.374 )") 00:22:50.374 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:50.374 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.374 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.374 { 00:22:50.374 "params": { 00:22:50.374 "name": "Nvme$subsystem", 00:22:50.374 "trtype": "$TEST_TRANSPORT", 00:22:50.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.374 "adrfam": "ipv4", 00:22:50.374 "trsvcid": "$NVMF_PORT", 00:22:50.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.374 "hdgst": ${hdgst:-false}, 00:22:50.374 "ddgst": ${ddgst:-false} 00:22:50.374 }, 00:22:50.374 "method": "bdev_nvme_attach_controller" 00:22:50.374 } 00:22:50.374 EOF 00:22:50.374 )") 00:22:50.374 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:50.374 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.374 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.374 { 00:22:50.374 "params": { 00:22:50.374 "name": "Nvme$subsystem", 00:22:50.374 "trtype": "$TEST_TRANSPORT", 00:22:50.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.374 "adrfam": "ipv4", 00:22:50.374 "trsvcid": "$NVMF_PORT", 00:22:50.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.374 "hdgst": ${hdgst:-false}, 00:22:50.374 "ddgst": ${ddgst:-false} 00:22:50.374 }, 00:22:50.374 "method": "bdev_nvme_attach_controller" 00:22:50.374 } 00:22:50.374 EOF 00:22:50.374 )") 00:22:50.374 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:50.374 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.374 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.374 { 00:22:50.374 "params": { 00:22:50.374 "name": "Nvme$subsystem", 00:22:50.374 "trtype": "$TEST_TRANSPORT", 00:22:50.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.374 "adrfam": "ipv4", 00:22:50.374 "trsvcid": "$NVMF_PORT", 00:22:50.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.374 "hdgst": ${hdgst:-false}, 00:22:50.374 "ddgst": ${ddgst:-false} 00:22:50.374 }, 00:22:50.374 "method": "bdev_nvme_attach_controller" 00:22:50.374 } 00:22:50.374 EOF 00:22:50.374 )") 00:22:50.374 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:50.374 [2024-12-10 14:24:51.111803] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:22:50.374 [2024-12-10 14:24:51.111854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1707723 ] 00:22:50.633 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.633 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.633 { 00:22:50.633 "params": { 00:22:50.633 "name": "Nvme$subsystem", 00:22:50.633 "trtype": "$TEST_TRANSPORT", 00:22:50.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.633 "adrfam": "ipv4", 00:22:50.633 "trsvcid": "$NVMF_PORT", 00:22:50.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.633 "hdgst": ${hdgst:-false}, 00:22:50.633 "ddgst": ${ddgst:-false} 00:22:50.633 }, 00:22:50.633 "method": "bdev_nvme_attach_controller" 00:22:50.633 } 00:22:50.633 EOF 00:22:50.633 )") 00:22:50.633 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:50.633 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.633 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.633 { 00:22:50.633 "params": { 00:22:50.633 "name": "Nvme$subsystem", 00:22:50.633 "trtype": "$TEST_TRANSPORT", 00:22:50.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.633 "adrfam": "ipv4", 00:22:50.633 "trsvcid": "$NVMF_PORT", 00:22:50.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.633 "hdgst": ${hdgst:-false}, 00:22:50.633 "ddgst": ${ddgst:-false} 00:22:50.633 }, 00:22:50.633 "method": "bdev_nvme_attach_controller" 00:22:50.633 } 00:22:50.633 EOF 00:22:50.633 )") 00:22:50.634 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:50.634 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.634 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.634 { 00:22:50.634 "params": { 00:22:50.634 "name": "Nvme$subsystem", 00:22:50.634 "trtype": "$TEST_TRANSPORT", 00:22:50.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.634 "adrfam": "ipv4", 00:22:50.634 "trsvcid": "$NVMF_PORT", 00:22:50.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.634 "hdgst": ${hdgst:-false}, 00:22:50.634 "ddgst": ${ddgst:-false} 00:22:50.634 }, 00:22:50.634 "method": "bdev_nvme_attach_controller" 00:22:50.634 } 00:22:50.634 EOF 00:22:50.634 )") 00:22:50.634 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:50.634 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.634 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.634 { 00:22:50.634 "params": { 00:22:50.634 "name": "Nvme$subsystem", 00:22:50.634 "trtype": "$TEST_TRANSPORT", 00:22:50.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.634 
"adrfam": "ipv4", 00:22:50.634 "trsvcid": "$NVMF_PORT", 00:22:50.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.634 "hdgst": ${hdgst:-false}, 00:22:50.634 "ddgst": ${ddgst:-false} 00:22:50.634 }, 00:22:50.634 "method": "bdev_nvme_attach_controller" 00:22:50.634 } 00:22:50.634 EOF 00:22:50.634 )") 00:22:50.634 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:50.634 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:22:50.634 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:50.634 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:50.634 "params": { 00:22:50.634 "name": "Nvme1", 00:22:50.634 "trtype": "tcp", 00:22:50.634 "traddr": "10.0.0.2", 00:22:50.634 "adrfam": "ipv4", 00:22:50.634 "trsvcid": "4420", 00:22:50.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.634 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:50.634 "hdgst": false, 00:22:50.634 "ddgst": false 00:22:50.634 }, 00:22:50.634 "method": "bdev_nvme_attach_controller" 00:22:50.634 },{ 00:22:50.634 "params": { 00:22:50.634 "name": "Nvme2", 00:22:50.634 "trtype": "tcp", 00:22:50.634 "traddr": "10.0.0.2", 00:22:50.634 "adrfam": "ipv4", 00:22:50.634 "trsvcid": "4420", 00:22:50.634 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:50.634 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:50.634 "hdgst": false, 00:22:50.634 "ddgst": false 00:22:50.634 }, 00:22:50.634 "method": "bdev_nvme_attach_controller" 00:22:50.634 },{ 00:22:50.634 "params": { 00:22:50.634 "name": "Nvme3", 00:22:50.634 "trtype": "tcp", 00:22:50.634 "traddr": "10.0.0.2", 00:22:50.634 "adrfam": "ipv4", 00:22:50.634 "trsvcid": "4420", 00:22:50.634 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:50.634 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:50.634 "hdgst": false, 00:22:50.634 "ddgst": false 00:22:50.634 }, 00:22:50.634 "method": "bdev_nvme_attach_controller" 00:22:50.634 },{ 00:22:50.634 "params": { 00:22:50.634 "name": "Nvme4", 00:22:50.634 "trtype": "tcp", 00:22:50.634 "traddr": "10.0.0.2", 00:22:50.634 "adrfam": "ipv4", 00:22:50.634 "trsvcid": "4420", 00:22:50.634 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:50.634 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:50.634 "hdgst": false, 00:22:50.634 "ddgst": false 00:22:50.634 }, 00:22:50.634 "method": "bdev_nvme_attach_controller" 00:22:50.634 },{ 00:22:50.634 "params": { 00:22:50.634 "name": "Nvme5", 00:22:50.634 "trtype": "tcp", 00:22:50.634 "traddr": "10.0.0.2", 00:22:50.634 "adrfam": "ipv4", 00:22:50.634 "trsvcid": "4420", 00:22:50.634 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:50.634 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:50.634 "hdgst": false, 00:22:50.634 "ddgst": false 00:22:50.634 }, 00:22:50.634 "method": "bdev_nvme_attach_controller" 00:22:50.634 },{ 00:22:50.634 "params": { 00:22:50.634 "name": "Nvme6", 00:22:50.634 "trtype": "tcp", 00:22:50.634 "traddr": "10.0.0.2", 00:22:50.634 "adrfam": "ipv4", 00:22:50.634 "trsvcid": "4420", 00:22:50.634 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:50.634 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:50.634 "hdgst": false, 00:22:50.634 "ddgst": false 00:22:50.634 }, 00:22:50.634 "method": "bdev_nvme_attach_controller" 00:22:50.634 },{ 00:22:50.634 "params": { 00:22:50.634 "name": "Nvme7", 00:22:50.634 "trtype": "tcp", 00:22:50.634 "traddr": "10.0.0.2", 
00:22:50.634 "adrfam": "ipv4", 00:22:50.634 "trsvcid": "4420", 00:22:50.634 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:50.634 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:50.634 "hdgst": false, 00:22:50.634 "ddgst": false 00:22:50.634 }, 00:22:50.634 "method": "bdev_nvme_attach_controller" 00:22:50.634 },{ 00:22:50.634 "params": { 00:22:50.634 "name": "Nvme8", 00:22:50.634 "trtype": "tcp", 00:22:50.634 "traddr": "10.0.0.2", 00:22:50.634 "adrfam": "ipv4", 00:22:50.634 "trsvcid": "4420", 00:22:50.634 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:50.634 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:50.634 "hdgst": false, 00:22:50.634 "ddgst": false 00:22:50.634 }, 00:22:50.634 "method": "bdev_nvme_attach_controller" 00:22:50.634 },{ 00:22:50.634 "params": { 00:22:50.634 "name": "Nvme9", 00:22:50.634 "trtype": "tcp", 00:22:50.634 "traddr": "10.0.0.2", 00:22:50.634 "adrfam": "ipv4", 00:22:50.634 "trsvcid": "4420", 00:22:50.634 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:50.634 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:50.634 "hdgst": false, 00:22:50.634 "ddgst": false 00:22:50.634 }, 00:22:50.634 "method": "bdev_nvme_attach_controller" 00:22:50.634 },{ 00:22:50.634 "params": { 00:22:50.634 "name": "Nvme10", 00:22:50.634 "trtype": "tcp", 00:22:50.634 "traddr": "10.0.0.2", 00:22:50.634 "adrfam": "ipv4", 00:22:50.634 "trsvcid": "4420", 00:22:50.634 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:50.634 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:50.634 "hdgst": false, 00:22:50.634 "ddgst": false 00:22:50.634 }, 00:22:50.634 "method": "bdev_nvme_attach_controller" 00:22:50.634 }' 00:22:50.634 [2024-12-10 14:24:51.195401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.634 [2024-12-10 14:24:51.234948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.011 Running I/O for 10 seconds... 
00:22:52.268 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.268 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:52.268 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:52.268 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.268 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.527 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.527 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:52.527 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:52.527 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:52.527 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:52.527 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:52.527 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:52.527 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:52.527 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:52.527 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:52.527 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:52.527 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.527 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.527 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.527 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:52.527 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:52.527 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:52.786 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:52.786 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:52.786 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:52.786 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:52.786 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.786 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.786 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.786 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:52.786 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:52.786 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:53.061 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:53.061 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:53.061 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:53.061 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:53.061 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.061 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:53.061 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.061 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:22:53.061 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:22:53.061 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:53.061 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:53.061 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:53.061 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1707449 00:22:53.061 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1707449 ']' 00:22:53.061 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1707449 00:22:53.061 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:22:53.061 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:53.061 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1707449 00:22:53.061 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:53.061 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:53.061 14:24:53 
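The three samples above (3, then 67, then 195 completed reads) are waitforio converging: it polls bdev_get_iostat for Nvme1n1 over the bdevperf RPC socket every 0.25 s until num_read_ops reaches 100, so the target about to be killed below is shut down while I/O is demonstrably in flight. A simplified sketch of that loop — the helper name is ours, and rpc_cmd in the real script wraps scripts/rpc.py:

# Return success once the bdev has completed at least 100 reads.
waitforio_sketch() {
  local sock=$1 bdev=$2 i ops
  for ((i = 10; i > 0; i--)); do
    ops=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0].num_read_ops')
    [[ ${ops:-0} -ge 100 ]] && return 0
    sleep 0.25
  done
  return 1
}
waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1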
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1707449' 00:22:53.061 killing process with pid 1707449 00:22:53.061 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1707449 00:22:53.061 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1707449
00:22:53.061 [2024-12-10 14:24:53.711163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2418f50 is same with the state(6) to be set
00:22:53.062 [... identical nvmf_tcp_qpair_set_recv_state errors for tqpair=0x2418f50 repeated through 14:24:53.711648 ...]
00:22:53.062 [2024-12-10 14:24:53.712741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x268d4e0 is same with the state(6) to be set
00:22:53.063 [... identical errors for tqpair=0x268d4e0 repeated through 14:24:53.713164 ...]
00:22:53.063 [2024-12-10 14:24:53.714265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10
14:24:53.714276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714292] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714305] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714342] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714373] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same 
with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714437] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714549] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714580] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714586] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.714659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419440 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the 
state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717097] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.063 [2024-12-10 14:24:53.717207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717259] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717272] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717314] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717320] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 
14:24:53.717364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.717462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419910 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.718298] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:53.064 [2024-12-10 14:24:53.718372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.064 [2024-12-10 14:24:53.718390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.064 [2024-12-10 14:24:53.718400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.064 [2024-12-10 14:24:53.718407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:53.064 [2024-12-10 14:24:53.718415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.064 [2024-12-10 14:24:53.718421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.064 [2024-12-10 14:24:53.718428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.064 [2024-12-10 14:24:53.718435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.064 [2024-12-10 14:24:53.718442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cac3c0 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.718480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.064 [2024-12-10 14:24:53.718489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.064 [2024-12-10 14:24:53.718496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.064 [2024-12-10 14:24:53.718504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.064 [2024-12-10 14:24:53.718511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.064 [2024-12-10 14:24:53.718517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.064 [2024-12-10 14:24:53.718525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.064 [2024-12-10 14:24:53.718532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.064 [2024-12-10 14:24:53.718538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857870 is same with the state(6) to be set 00:22:53.064 [2024-12-10 14:24:53.718566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.064 [2024-12-10 14:24:53.718575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.064 [2024-12-10 14:24:53.718583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.064 [2024-12-10 14:24:53.718590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.064 [2024-12-10 14:24:53.718596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.064 [2024-12-10 14:24:53.718603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.064 [2024-12-10 14:24:53.718610] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:53.064 [2024-12-10 14:24:53.718617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.064 [2024-12-10 14:24:53.718623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18561b0 is same with the state(6) to be set
00:22:53.064 [2024-12-10 14:24:53.718641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419e00 is same with the state(6) to be set
00:22:53.064 [2024-12-10 14:24:53.718648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:53.064 [2024-12-10 14:24:53.718658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.064 [2024-12-10 14:24:53.718665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:53.064 [2024-12-10 14:24:53.718673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.064 [2024-12-10 14:24:53.718681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:53.064 [2024-12-10 14:24:53.718690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.064 [2024-12-10 14:24:53.718699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:53.064 [2024-12-10 14:24:53.718710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.065 [2024-12-10 14:24:53.718720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c790 is same with the state(6) to be set
00:22:53.065 [2024-12-10 14:24:53.718751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:53.065 [2024-12-10 14:24:53.718763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.065 [2024-12-10 14:24:53.718773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:53.065 [2024-12-10 14:24:53.718784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.065 [2024-12-10 14:24:53.718795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:53.065 [2024-12-10 14:24:53.718802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.065 [2024-12-10 14:24:53.718811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:53.065 [2024-12-10 14:24:53.718819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.065 [2024-12-10 14:24:53.718826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1858750 is same with the state(6) to be set
00:22:53.065 [2024-12-10 14:24:53.718880] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.065 [2024-12-10 14:24:53.718938] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.065 [2024-12-10 14:24:53.719130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419e00 is same with the state(6) to be set
00:22:53.066 [2024-12-10 14:24:53.720716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a7c0 is same with the state(6) to be set
00:22:53.066 [2024-12-10 14:24:53.721133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a7c0 is same with the state(6) to be set
00:22:53.066 [2024-12-10 14:24:53.722120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241ac90 is same with the state(6) to be set
00:22:53.067 [2024-12-10 14:24:53.722526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241ac90 is same with the state(6) to be set
00:22:53.067 [2024-12-10 14:24:53.723280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b180 is same with the state(6) to be set
00:22:53.067 [2024-12-10 14:24:53.723314] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b180 is same
with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723730] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723811] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723853] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723860] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723872] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723879] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723936] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the 
state(6) to be set 00:22:53.067 [2024-12-10 14:24:53.723953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.068 [2024-12-10 14:24:53.723959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.068 [2024-12-10 14:24:53.723965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.068 [2024-12-10 14:24:53.723972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.068 [2024-12-10 14:24:53.723978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.068 [2024-12-10 14:24:53.723986] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.068 [2024-12-10 14:24:53.723992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.068 [2024-12-10 14:24:53.723998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.068 [2024-12-10 14:24:53.724004] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.068 [2024-12-10 14:24:53.724010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set 00:22:53.068 [2024-12-10 14:24:53.730521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.730987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.730994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.731002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.731008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.731017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.731023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.731031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.731038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.731046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.731052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.731060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.068 [2024-12-10 14:24:53.731067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.731075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:53.068 [2024-12-10 14:24:53.731083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.068 [2024-12-10 14:24:53.731092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 
[2024-12-10 14:24:53.731241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 
14:24:53.731392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.069 [2024-12-10 14:24:53.731507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:53.069 [2024-12-10 14:24:53.731704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.069 [2024-12-10 14:24:53.731719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 
14:24:53.731728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.069 [2024-12-10 14:24:53.731735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.069 [2024-12-10 14:24:53.731749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.069 [2024-12-10 14:24:53.731765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c831a0 is same with the state(6) to be set 00:22:53.069 [2024-12-10 14:24:53.731797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.069 [2024-12-10 14:24:53.731806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.069 [2024-12-10 14:24:53.731821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.069 [2024-12-10 14:24:53.731835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.069 [2024-12-10 14:24:53.731850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18582c0 is same with the state(6) to be set 00:22:53.069 [2024-12-10 14:24:53.731878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cac3c0 (9): Bad file descriptor 00:22:53.069 [2024-12-10 14:24:53.731907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.069 [2024-12-10 14:24:53.731917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.069 [2024-12-10 14:24:53.731924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.069 [2024-12-10 14:24:53.731931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.069 [2024-12-10 14:24:53.731938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:53.069 [2024-12-10 14:24:53.731945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.070 [2024-12-10 14:24:53.731952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:53.070 [2024-12-10 14:24:53.731965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.070 [2024-12-10 14:24:53.731971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176d610 is same with the state(6) to be set
00:22:53.070 [2024-12-10 14:24:53.731985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1857870 (9): Bad file descriptor
00:22:53.070 [2024-12-10 14:24:53.732001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18561b0 (9): Bad file descriptor
00:22:53.070 [2024-12-10 14:24:53.732016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x184c790 (9): Bad file descriptor
00:22:53.070 [2024-12-10 14:24:53.732039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:53.070 [2024-12-10 14:24:53.732047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.070 [2024-12-10 14:24:53.732056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:53.070 [2024-12-10 14:24:53.732062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.070 [2024-12-10 14:24:53.732070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:53.070 [2024-12-10 14:24:53.732076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.070 [2024-12-10 14:24:53.732083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:53.070 [2024-12-10 14:24:53.732090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.070 [2024-12-10 14:24:53.732096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb8120 is same with the state(6) to be set
00:22:53.070 [2024-12-10 14:24:53.732107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1858750 (9): Bad file descriptor
00:22:53.070 [2024-12-10 14:24:53.733238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b650 is same with the state(6) to be set
00:22:53.070 [2024-12-10 14:24:53.734693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.070 [2024-12-10 14:24:53.734710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.070 [2024-12-10 14:24:53.734722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.070 [2024-12-10 14:24:53.734729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.070 [2024-12-10 14:24:53.734738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.070 [2024-12-10 14:24:53.734745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.070 [2024-12-10 14:24:53.734753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.070 [2024-12-10 14:24:53.734760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.070 [2024-12-10 14:24:53.734769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.070 [2024-12-10 14:24:53.734776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.070 [2024-12-10 14:24:53.734784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.070 [2024-12-10 14:24:53.734791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.070 [2024-12-10 14:24:53.734799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.070 [2024-12-10 14:24:53.734806] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.070 [2024-12-10 14:24:53.734814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.070 [2024-12-10 14:24:53.734828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.070 [2024-12-10 14:24:53.734836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.070 [2024-12-10 14:24:53.734843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.070 [2024-12-10 14:24:53.734851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.070 [2024-12-10 14:24:53.734858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.070 [2024-12-10 14:24:53.734866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.070 [2024-12-10 14:24:53.734873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.070 [2024-12-10 14:24:53.734881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.070 [2024-12-10 14:24:53.734888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.070 [2024-12-10 14:24:53.734897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.070 [2024-12-10 14:24:53.734904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.070 [2024-12-10 14:24:53.734913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.070 [2024-12-10 14:24:53.734920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.070 [2024-12-10 14:24:53.734929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.070 [2024-12-10 14:24:53.734935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.070 [2024-12-10 14:24:53.734943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.070 [2024-12-10 14:24:53.734950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.070 [2024-12-10 14:24:53.734957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.070 [2024-12-10 14:24:53.734964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.070 [2024-12-10 14:24:53.734973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.070 [2024-12-10 14:24:53.734979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 45 further READ commands, cid:18 through cid:62 (lba 35072 through 40704 in steps of 128, len:128), each completed with ABORTED - SQ DELETION (00/08) ...]
00:22:53.072 [2024-12-10 14:24:53.740737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.072 [2024-12-10 14:24:53.740743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.072 [2024-12-10 14:24:53.740752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5d7f0 is same with the state(6) to be set
00:22:53.072 [2024-12-10 14:24:53.744824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.072 [2024-12-10 14:24:53.744849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 further WRITE commands, cid:1 through cid:62 (lba 32896 through 40704 in steps of 128, len:128), each completed with ABORTED - SQ DELETION (00/08) ...]
00:22:53.073 [2024-12-10 14:24:53.745842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.073 [2024-12-10 14:24:53.745848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.073 [2024-12-10 14:24:53.753499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c831a0 (9): Bad file descriptor
00:22:53.073 [2024-12-10 14:24:53.753538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18582c0 (9): Bad file descriptor
00:22:53.073 [2024-12-10 14:24:53.753577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:53.073 [2024-12-10 14:24:53.753587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 3 further ASYNC EVENT REQUEST (0c) commands, cid:1 through cid:3, each completed with ABORTED - SQ DELETION (00/08) ...]
00:22:53.073 [2024-12-10 14:24:53.753641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cda610 is same with the state(6) to be set
00:22:53.073 [2024-12-10 14:24:53.753662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176d610 (9): Bad file descriptor
00:22:53.073 [2024-12-10 14:24:53.753687] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:22:53.073 [2024-12-10 14:24:53.753701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb8120 (9): Bad file descriptor
00:22:53.073 [2024-12-10 14:24:53.755754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:22:53.073 task offset: 37760 on job bdev=Nvme8n1 fails
00:22:53.073 2245.00 IOPS, 140.31 MiB/s [2024-12-10T13:24:53.813Z] [2024-12-10 14:24:53.755880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.073 [2024-12-10 14:24:53.755896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 further READ commands, cid:1 through cid:62 (lba 32896 through 40704 in steps of 128, len:128), each completed with ABORTED - SQ DELETION (00/08) ...]
00:22:53.075 [2024-12-10 14:24:53.756889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.075 [2024-12-10 14:24:53.756895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.075 [2024-12-10 14:24:53.757896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.075 [2024-12-10 14:24:53.757909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 23 further READ commands, cid:1 through cid:23 (lba 24704 through 27520 in steps of 128, len:128), each completed with ABORTED - SQ DELETION (00/08) ...]
00:22:53.076 [2024-12-10 14:24:53.758287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.076 [2024-12-10 14:24:53.758293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.076 [2024-12-10 14:24:53.758301] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758454] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.076 [2024-12-10 14:24:53.758734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.076 [2024-12-10 14:24:53.758741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.758750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.758756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.758765] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.758772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.758780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.758786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.758795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.758802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.758810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.758816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.758825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.758831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.758839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.758846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.758855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.758861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.758870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.758877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.758885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.758895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760472] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.077 [2024-12-10 14:24:53.760762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.077 [2024-12-10 14:24:53.760771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.760778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.760787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.760794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.760802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.760808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.760817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.760824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.760832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.760839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.760848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.760854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.760862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.760869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.760879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.760886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.760894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.760901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.760910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.760918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.760926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.760932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.760940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.760948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.760956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.760962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.760971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.760978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.760986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.760993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.761002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.761009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.761017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.761024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.761033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.761039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.761048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.761055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.761063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.761072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.761080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.761086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.761095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:53.078 [2024-12-10 14:24:53.761102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.761110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.761116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.761124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.761131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.761139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.761146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.761154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.761161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.761169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.761177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.761185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.761191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.761199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.761207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.761226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.761234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.761243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.761250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.761258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 
14:24:53.761265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.761275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.761282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.761291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.761297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.761305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5f000 is same with the state(6) to be set 00:22:53.078 [2024-12-10 14:24:53.762265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:53.078 [2024-12-10 14:24:53.762285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:53.078 [2024-12-10 14:24:53.762561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:53.078 [2024-12-10 14:24:53.762576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb8120 with addr=10.0.0.2, port=4420 00:22:53.078 [2024-12-10 14:24:53.762585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb8120 is same with the state(6) to be set 00:22:53.078 [2024-12-10 14:24:53.762653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.078 [2024-12-10 14:24:53.762663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.078 [2024-12-10 14:24:53.762676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.762683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.762693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.762700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.762709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.762716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.762727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.762733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 
14:24:53.762742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.762749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.762757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.762769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.762778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.762785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.762798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.762804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.762813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.762820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.762828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.762835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.762844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.762851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.762859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.762865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.762874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.762881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.762889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.762896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.762905] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.762912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.762920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.762927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.762935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.762941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.762951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.762957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.762966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.762973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.762982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.762988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.763003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.763010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.763019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.763026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.763035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.763042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.763051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.079 [2024-12-10 14:24:53.763058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.079 [2024-12-10 14:24:53.763066] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.079 [2024-12-10 14:24:53.763074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.079 [2024-12-10 14:24:53.763082-.763654] nvme_qpair.c: (37 further READ/ABORTED - SQ DELETION (00/08) pairs elided: cid:26-62, lba:27904-32512, len:128, identical apart from cid, lba and timestamp)
00:22:53.080 [2024-12-10 14:24:53.763662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.080 [2024-12-10 14:24:53.763670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.080 [2024-12-10 14:24:53.763677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5c740 is same with the state(6) to be set
00:22:53.080 [2024-12-10 14:24:53.765128] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.080 [2024-12-10 14:24:53.765189] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.080 [2024-12-10 14:24:53.765275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:53.080 [2024-12-10 14:24:53.765296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:22:53.080 [2024-12-10 14:24:53.765310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:22:53.080 [2024-12-10 14:24:53.765322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:53.080 [2024-12-10 14:24:53.765487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.080 [2024-12-10 14:24:53.765501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x184c790 with addr=10.0.0.2, port=4420
00:22:53.080 [2024-12-10 14:24:53.765511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c790 is same with the state(6) to be set
00:22:53.080 [2024-12-10 14:24:53.765647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.080 [2024-12-10 14:24:53.765658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c831a0 with addr=10.0.0.2, port=4420
00:22:53.080 [2024-12-10 14:24:53.765666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c831a0 is same with the state(6) to be set
00:22:53.080 [2024-12-10 14:24:53.765677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb8120 (9): Bad file descriptor
00:22:53.080 [2024-12-10 14:24:53.765699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cda610 (9): Bad file descriptor
00:22:53.080 [2024-12-10 14:24:53.766278] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.080 [2024-12-10 14:24:53.766665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.080 [2024-12-10 14:24:53.766683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18561b0 with addr=10.0.0.2, port=4420
00:22:53.080 [2024-12-10 14:24:53.766691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18561b0 is same with the state(6) to be set
00:22:53.080 [2024-12-10 14:24:53.766832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.080 [2024-12-10 14:24:53.766844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1857870 with addr=10.0.0.2, port=4420
00:22:53.080 [2024-12-10 14:24:53.766852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857870 is same with the state(6) to be set
00:22:53.080 [2024-12-10 14:24:53.766925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.080 [2024-12-10 14:24:53.766934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cac3c0 with addr=10.0.0.2, port=4420
00:22:53.080 [2024-12-10 14:24:53.766942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cac3c0 is same with the state(6) to be set
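The repeated "connect() failed, errno = 111" lines above are plain POSIX ECONNREFUSED: the reset path keeps re-dialing the target at 10.0.0.2:4420 while its listener is down, so every connect() is refused and each qpair reconnect fails. A minimal standalone C sketch, not SPDK code (address and port copied from the log; run on a host where that address is reachable but nothing listens on the port, it prints the same errno):

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            return 1;
        }

        /* 10.0.0.2:4420 is the target address/port from the log lines above. */
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            /* With no listener on the port this is ECONNREFUSED (111 on Linux);
             * an unroutable address would instead fail with ETIMEDOUT. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }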
00:22:53.081 [2024-12-10 14:24:53.767023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.081 [2024-12-10 14:24:53.767034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1858750 with addr=10.0.0.2, port=4420
00:22:53.081 [2024-12-10 14:24:53.767041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1858750 is same with the state(6) to be set
00:22:53.081 [2024-12-10 14:24:53.767054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x184c790 (9): Bad file descriptor
00:22:53.081 [2024-12-10 14:24:53.767065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c831a0 (9): Bad file descriptor
00:22:53.081 [2024-12-10 14:24:53.767073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:22:53.081 [2024-12-10 14:24:53.767081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:22:53.081 [2024-12-10 14:24:53.767090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:22:53.081 [2024-12-10 14:24:53.767099] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:22:53.081 [2024-12-10 14:24:53.767407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.081 [2024-12-10 14:24:53.767420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.081 [2024-12-10 14:24:53.767431-.768409] nvme_qpair.c: (62 further READ/ABORTED - SQ DELETION (00/08) pairs elided: cid:1-62, lba:24704-32512, len:128, identical apart from cid, lba and timestamp)
00:22:53.082 [2024-12-10 14:24:53.768417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.082 [2024-12-10 14:24:53.768424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.082 [2024-12-10 14:24:53.768431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a470 is same with the state(6) to be set
00:22:53.082 [2024-12-10 14:24:53.769422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.082 [2024-12-10 14:24:53.769435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.082 [2024-12-10 14:24:53.769445-.770431] nvme_qpair.c: (62 further READ/ABORTED - SQ DELETION (00/08) pairs elided: cid:1-62, lba:32896-40704, len:128, identical apart from cid, lba and timestamp)
00:22:53.084 [2024-12-10 14:24:53.770440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.084 [2024-12-10 14:24:53.770447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.084 [2024-12-10 14:24:53.770455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5b780 is same with the state(6) to be set
00:22:53.084 [2024-12-10 14:24:53.771448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:22:53.084 [2024-12-10 14:24:53.771467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:53.084 [2024-12-10 14:24:53.771491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18561b0 (9): Bad file descriptor
00:22:53.084 [2024-12-10 14:24:53.771504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1857870 (9): Bad file descriptor
00:22:53.084 [2024-12-10 14:24:53.771520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cac3c0 (9): Bad file descriptor
00:22:53.084 [2024-12-10 14:24:53.771530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1858750 (9): Bad file descriptor
00:22:53.084 [2024-12-10 14:24:53.771538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:22:53.084 [2024-12-10 14:24:53.771545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:22:53.084 [2024-12-10 14:24:53.771553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:22:53.084 [2024-12-10 14:24:53.771561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:22:53.084 [2024-12-10 14:24:53.771569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:22:53.084 [2024-12-10 14:24:53.771576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:22:53.084 [2024-12-10 14:24:53.771583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:22:53.084 [2024-12-10 14:24:53.771589] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
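Each aborted READ above ends with the status pair "(00/08)", which spdk_nvme_print_completion renders as (SCT/SC): status code type 0x0 is the NVMe generic command status set and status code 0x08 is "Command Aborted due to SQ Deletion", i.e. the I/O submission queue was deleted out from under the in-flight reads during the reset, and dnr:0 marks them as retryable. A small decode sketch, assuming the usual packing of the 16-bit completion status word (phase in bit 0, SC in bits 1-8, SCT in bits 9-11, DNR in bit 15):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Build the status word the log reports: SCT=0x0 (generic command
         * status), SC=0x08 (Command Aborted due to SQ Deletion), DNR=0. */
        uint16_t status = (uint16_t)((0x0 << 9) | (0x08 << 1));

        uint8_t sct = (status >> 9) & 0x7;   /* printed first:  "00" */
        uint8_t sc  = (status >> 1) & 0xff;  /* printed second: "08" */
        uint8_t dnr = (status >> 15) & 0x1;  /* matches "dnr:0" above */

        printf("(%02x/%02x) dnr:%u\n", sct, sc, dnr);  /* -> (00/08) dnr:0 */
        return 0;
    }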
00:22:53.084 [2024-12-10 14:24:53.771662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:22:53.084 [2024-12-10 14:24:53.771773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.084 [2024-12-10 14:24:53.771785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18582c0 with addr=10.0.0.2, port=4420
00:22:53.084 [2024-12-10 14:24:53.771794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18582c0 is same with the state(6) to be set
00:22:53.084 [2024-12-10 14:24:53.772005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.084 [2024-12-10 14:24:53.772016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x176d610 with addr=10.0.0.2, port=4420
00:22:53.084 [2024-12-10 14:24:53.772024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176d610 is same with the state(6) to be set
00:22:53.084 [2024-12-10 14:24:53.772031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:22:53.084 [2024-12-10 14:24:53.772037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:22:53.084 [2024-12-10 14:24:53.772045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:22:53.084 [2024-12-10 14:24:53.772052] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:22:53.084 [2024-12-10 14:24:53.772059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:22:53.084 [2024-12-10 14:24:53.772065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:22:53.084 [2024-12-10 14:24:53.772071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:22:53.084 [2024-12-10 14:24:53.772079] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:22:53.084 [2024-12-10 14:24:53.772086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:22:53.084 [2024-12-10 14:24:53.772092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:22:53.084 [2024-12-10 14:24:53.772098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:22:53.084 [2024-12-10 14:24:53.772105] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:22:53.084 [2024-12-10 14:24:53.772115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:53.084 [2024-12-10 14:24:53.772121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:53.084 [2024-12-10 14:24:53.772128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:53.084 [2024-12-10 14:24:53.772135] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:53.084 [2024-12-10 14:24:53.772726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.084 [2024-12-10 14:24:53.772742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb8120 with addr=10.0.0.2, port=4420
00:22:53.084 [2024-12-10 14:24:53.772749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb8120 is same with the state(6) to be set
00:22:53.084 [2024-12-10 14:24:53.772759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18582c0 (9): Bad file descriptor
00:22:53.084 [2024-12-10 14:24:53.772769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176d610 (9): Bad file descriptor
00:22:53.084 [2024-12-10 14:24:53.772821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb8120 (9): Bad file descriptor
00:22:53.084 [2024-12-10 14:24:53.772831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:22:53.084 [2024-12-10 14:24:53.772838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:22:53.084 [2024-12-10 14:24:53.772845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:22:53.084 [2024-12-10 14:24:53.772851] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:22:53.084 [2024-12-10 14:24:53.772859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:22:53.084 [2024-12-10 14:24:53.772865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:22:53.084 [2024-12-10 14:24:53.772871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:22:53.084 [2024-12-10 14:24:53.772878] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:22:53.084 [2024-12-10 14:24:53.772905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:22:53.084 [2024-12-10 14:24:53.772916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:53.084 [2024-12-10 14:24:53.772936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:22:53.084 [2024-12-10 14:24:53.772943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:22:53.084 [2024-12-10 14:24:53.772950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:22:53.084 [2024-12-10 14:24:53.772957] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:22:53.085 [2024-12-10 14:24:53.773158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.085 [2024-12-10 14:24:53.773170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c831a0 with addr=10.0.0.2, port=4420
00:22:53.085 [2024-12-10 14:24:53.773178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c831a0 is same with the state(6) to be set
00:22:53.085 [2024-12-10 14:24:53.773390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.085 [2024-12-10 14:24:53.773401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x184c790 with addr=10.0.0.2, port=4420
00:22:53.085 [2024-12-10 14:24:53.773412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c790 is same with the state(6) to be set
00:22:53.085 [2024-12-10 14:24:53.773431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c831a0 (9): Bad file descriptor
00:22:53.085 [2024-12-10 14:24:53.773441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x184c790 (9): Bad file descriptor
00:22:53.085 [2024-12-10 14:24:53.773459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:22:53.085 [2024-12-10 14:24:53.773466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:22:53.085 [2024-12-10 14:24:53.773473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:22:53.085 [2024-12-10 14:24:53.773479] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:22:53.085 [2024-12-10 14:24:53.773487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:22:53.085 [2024-12-10 14:24:53.773493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:22:53.085 [2024-12-10 14:24:53.773500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:22:53.085 [2024-12-10 14:24:53.773506] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:22:53.085 [2024-12-10 14:24:53.775365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 
14:24:53.775544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775701] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.085 [2024-12-10 14:24:53.775895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.085 [2024-12-10 14:24:53.775902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.775913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.775920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.775929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.775936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.775944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.775951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.775959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.775967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.775975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.775983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.775991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.775998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.776006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.776012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.776021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.776027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.776036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.776043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.776051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.776058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.776067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.776074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.776083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.776090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.776098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.776107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.776115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.776122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.776130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.776138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.776145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.776153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.776161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.776168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.776176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.776183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.776191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.776198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.776206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.776213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.776226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.776233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.776242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.776248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.776257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.776263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.776272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.776279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.776287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.776294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.776305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.776314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.776323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.086 [2024-12-10 14:24:53.776330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.086 [2024-12-10 14:24:53.776339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.086 [2024-12-10 14:24:53.776345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.086 [2024-12-10 14:24:53.776354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.086 [2024-12-10 14:24:53.776361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.086 [2024-12-10 14:24:53.776370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.086 [2024-12-10 14:24:53.776377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.086 [2024-12-10 14:24:53.776386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.086 [2024-12-10 14:24:53.776394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.086 [2024-12-10 14:24:53.776401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5dd50 is same with the state(6) to be set
00:22:53.346
00:22:53.346 Latency(us)
00:22:53.346 [2024-12-10T13:24:54.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:53.346 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.346 Job: Nvme1n1 ended in about 1.04 seconds with error
00:22:53.346 Verification LBA range: start 0x0 length 0x400
00:22:53.346 Nvme1n1 : 1.04 185.16 11.57 61.72 0.00 256952.08 18474.91 227690.79
00:22:53.346 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.346 Job: Nvme2n1 ended in about 1.03 seconds with error
00:22:53.346 Verification LBA range: start 0x0 length 0x400
00:22:53.346 Nvme2n1 : 1.03 254.13 15.88 62.32 0.00 197324.72 16227.96 202724.69
00:22:53.346 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.346 Job: Nvme3n1 ended in about 1.03 seconds with error
00:22:53.346 Verification LBA range: start 0x0 length 0x400
00:22:53.346 Nvme3n1 : 1.03 248.50 15.53 62.12 0.00 197964.36 13356.86 214708.42
00:22:53.346 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.346 Job: Nvme4n1 ended in about 1.03 seconds with error
00:22:53.346 Verification LBA range: start 0x0 length 0x400
00:22:53.346 Nvme4n1 : 1.03 186.02 11.63 62.01 0.00 244127.94 23717.79 227690.79
00:22:53.346 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.346 Job: Nvme5n1 ended in about 1.03 seconds with error
00:22:53.346 Verification LBA range: start 0x0 length 0x400
00:22:53.346 Nvme5n1 : 1.03 249.03 15.56 62.26 0.00 191343.57 15229.32 215707.06
00:22:53.346 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.346 Job: Nvme6n1 ended in about 1.04 seconds with error
00:22:53.346 Verification LBA range: start 0x0 length 0x400
00:22:53.346 Nvme6n1 : 1.04 184.32 11.52 61.44 0.00 238794.61 16727.28 210713.84
00:22:53.346 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.346 Job: Nvme7n1 ended in about 1.04 seconds with error
00:22:53.346 Verification LBA range: start 0x0 length 0x400
00:22:53.346 Nvme7n1 : 1.04 245.28 15.33 61.32 0.00 188263.28 16227.96 210713.84
00:22:53.346 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.346 Job: Nvme8n1 ended in about 1.02 seconds with error
00:22:53.347 Verification LBA range: start 0x0 length 0x400
00:22:53.347 Nvme8n1 : 1.02 250.20 15.64 62.55 0.00 181082.31 14792.41 213709.78
00:22:53.347 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.347 Job: Nvme9n1 ended in about 1.05 seconds with error
00:22:53.347 Verification LBA range: start 0x0 length 0x400
00:22:53.347 Nvme9n1 : 1.05 182.92 11.43 60.97 0.00 229147.31 16227.96 216705.71
00:22:53.347 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.347 Job: Nvme10n1 ended in about 1.03 seconds with error
00:22:53.347 Verification LBA range: start 0x0 length 0x400
00:22:53.347 Nvme10n1 : 1.03 185.58 11.60 61.86 0.00 221584.82 18474.91 233682.65
00:22:53.347 [2024-12-10T13:24:54.087Z] ===================================================================================================================
00:22:53.347 [2024-12-10T13:24:54.087Z] Total : 2171.14 135.70 618.57 0.00 212025.99 13356.86 233682.65
00:22:53.347 [2024-12-10 14:24:53.805173] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:53.347 [2024-12-10 14:24:53.805230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:53.347 [2024-12-10 14:24:53.805681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.347 [2024-12-10 14:24:53.805703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cda610 with addr=10.0.0.2, port=4420
00:22:53.347 [2024-12-10 14:24:53.805715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cda610 is same with the state(6) to be set
00:22:53.347 [2024-12-10 14:24:53.805774] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:22:53.347 [2024-12-10 14:24:53.805788] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:22:53.347 [2024-12-10 14:24:53.805800] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:22:53.347 [2024-12-10 14:24:53.805810] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:22:53.347 [2024-12-10 14:24:53.806051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:53.347 [2024-12-10 14:24:53.806065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:22:53.347 [2024-12-10 14:24:53.806075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:22:53.347 [2024-12-10 14:24:53.806083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:53.347 [2024-12-10 14:24:53.806137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cda610 (9): Bad file descriptor
00:22:53.347 [2024-12-10 14:24:53.806184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:53.347 [2024-12-10 14:24:53.806197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:22:53.347 [2024-12-10 14:24:53.806206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:22:53.347 [2024-12-10 14:24:53.806431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.347 [2024-12-10 14:24:53.806445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1858750 with addr=10.0.0.2, port=4420
00:22:53.347 [2024-12-10 14:24:53.806460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1858750 is same with the state(6) to be set
00:22:53.347 [2024-12-10 14:24:53.806528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.347 [2024-12-10 14:24:53.806540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cac3c0 with addr=10.0.0.2, port=4420
00:22:53.347 [2024-12-10 14:24:53.806549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cac3c0 is same with the state(6) to be set
00:22:53.347 [2024-12-10 14:24:53.806696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.347 [2024-12-10 14:24:53.806707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1857870 with addr=10.0.0.2, port=4420
00:22:53.347 [2024-12-10 14:24:53.806715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857870 is same with the state(6) to be set
00:22:53.347 [2024-12-10 14:24:53.806937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.347 [2024-12-10 14:24:53.806950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18561b0 with addr=10.0.0.2, port=4420
00:22:53.347 [2024-12-10 14:24:53.806957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18561b0 is same with the state(6) to be set
00:22:53.347 [2024-12-10 14:24:53.806965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:22:53.347 [2024-12-10 14:24:53.806972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:22:53.347 [2024-12-10 14:24:53.806981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:22:53.347 [2024-12-10 14:24:53.806989] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:22:53.347 [2024-12-10 14:24:53.807020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:53.347 [2024-12-10 14:24:53.807031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:22:53.347 [2024-12-10 14:24:53.807200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.347 [2024-12-10 14:24:53.807213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x176d610 with addr=10.0.0.2, port=4420
00:22:53.347 [2024-12-10 14:24:53.807226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176d610 is same with the state(6) to be set
00:22:53.347 [2024-12-10 14:24:53.807393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.347 [2024-12-10 14:24:53.807404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18582c0 with addr=10.0.0.2, port=4420
00:22:53.347 [2024-12-10 14:24:53.807412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18582c0 is same with the state(6) to be set
00:22:53.347 [2024-12-10 14:24:53.807481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.347 [2024-12-10 14:24:53.807491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb8120 with addr=10.0.0.2, port=4420
00:22:53.347 [2024-12-10 14:24:53.807498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb8120 is same with the state(6) to be set
00:22:53.347 [2024-12-10 14:24:53.807509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1858750 (9): Bad file descriptor
00:22:53.347 [2024-12-10 14:24:53.807519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cac3c0 (9): Bad file descriptor
00:22:53.347 [2024-12-10 14:24:53.807527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1857870 (9): Bad file descriptor
00:22:53.347 [2024-12-10 14:24:53.807536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18561b0 (9): Bad file descriptor
00:22:53.347 [2024-12-10 14:24:53.807703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.347 [2024-12-10 14:24:53.807720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x184c790 with addr=10.0.0.2, port=4420
00:22:53.347 [2024-12-10 14:24:53.807728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c790 is same with the state(6) to be set
00:22:53.347 [2024-12-10 14:24:53.807815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.347 [2024-12-10 14:24:53.807828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c831a0 with addr=10.0.0.2, port=4420
00:22:53.347 [2024-12-10 14:24:53.807835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c831a0 is same with the state(6) to be set
00:22:53.347 [2024-12-10 14:24:53.807843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176d610 (9): Bad file descriptor
00:22:53.347 [2024-12-10 14:24:53.807852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18582c0 (9): Bad file descriptor
00:22:53.347 [2024-12-10 14:24:53.807861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb8120 (9): Bad file descriptor
00:22:53.347 [2024-12-10 14:24:53.807869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:53.347 [2024-12-10 14:24:53.807876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:53.347 [2024-12-10 14:24:53.807883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:53.347 [2024-12-10 14:24:53.807890] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:53.347 [2024-12-10 14:24:53.807898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:22:53.347 [2024-12-10 14:24:53.807905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:22:53.347 [2024-12-10 14:24:53.807912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:22:53.347 [2024-12-10 14:24:53.807918] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:22:53.347 [2024-12-10 14:24:53.807926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:22:53.347 [2024-12-10 14:24:53.807932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:22:53.347 [2024-12-10 14:24:53.807939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:22:53.347 [2024-12-10 14:24:53.807945] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:22:53.347 [2024-12-10 14:24:53.807952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:22:53.347 [2024-12-10 14:24:53.807958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:22:53.347 [2024-12-10 14:24:53.807964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:22:53.347 [2024-12-10 14:24:53.807970] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:22:53.347 [2024-12-10 14:24:53.807996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x184c790 (9): Bad file descriptor
00:22:53.347 [2024-12-10 14:24:53.808007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c831a0 (9): Bad file descriptor
00:22:53.347 [2024-12-10 14:24:53.808016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:22:53.347 [2024-12-10 14:24:53.808022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:22:53.347 [2024-12-10 14:24:53.808030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:22:53.347 [2024-12-10 14:24:53.808037] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:22:53.347 [2024-12-10 14:24:53.808044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:22:53.347 [2024-12-10 14:24:53.808051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:22:53.347 [2024-12-10 14:24:53.808057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:22:53.347 [2024-12-10 14:24:53.808063] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:22:53.348 [2024-12-10 14:24:53.808071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:22:53.348 [2024-12-10 14:24:53.808077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:22:53.348 [2024-12-10 14:24:53.808083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:22:53.348 [2024-12-10 14:24:53.808089] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:22:53.348 [2024-12-10 14:24:53.808111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:22:53.348 [2024-12-10 14:24:53.808119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:22:53.348 [2024-12-10 14:24:53.808126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:22:53.348 [2024-12-10 14:24:53.808133] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:22:53.348 [2024-12-10 14:24:53.808139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:22:53.348 [2024-12-10 14:24:53.808147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:22:53.348 [2024-12-10 14:24:53.808153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:22:53.348 [2024-12-10 14:24:53.808161] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:22:53.605 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1707723
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1707723
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1707723
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:54.544 rmmod nvme_tcp
rmmod nvme_fabrics
00:22:54.544 rmmod nvme_keyring
00:22:54.544 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:54.545 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:22:54.545 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:22:54.545 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1707449 ']'
00:22:54.545 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1707449
00:22:54.545 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1707449 ']'
00:22:54.545 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1707449
00:22:54.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1707449) - No such process
00:22:54.545 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1707449 is not found'
00:22:54.545 Process with pid 1707449 is not found
00:22:54.545 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:54.545 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:54.545 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:54.545 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:22:54.545 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
00:22:54.545 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:54.545 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
00:22:54.545 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:54.545 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:54.545 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:54.545 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:54.545 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:57.078
00:22:57.078 real 0m8.060s
00:22:57.078 user 0m20.195s
00:22:57.078 sys 0m1.441s
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:57.078 ************************************
00:22:57.078 END TEST nvmf_shutdown_tc3 ************************************
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:22:57.078 ************************************
00:22:57.078 START TEST nvmf_shutdown_tc4 ************************************
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=()
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=()
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=()
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=()
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=()
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:57.078 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.078 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:57.078 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.079 14:24:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:57.079 Found net devices under 0000:af:00.0: cvl_0_0 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:57.079 Found net devices under 0000:af:00.1: cvl_0_1 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:57.079 14:24:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:57.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:22:57.079 00:22:57.079 --- 10.0.0.2 ping statistics --- 00:22:57.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.079 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:57.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:22:57.079 00:22:57.079 --- 10.0.0.1 ping statistics --- 00:22:57.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.079 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1708979 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1708979 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1708979 ']' 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
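The nvmfappstart/waitforlisten sequence traced above (the quadruple `ip netns exec cvl_0_0_ns_spdk` prefix comes from NVMF_APP being re-prefixed during init) reduces to roughly the following. This is a sketch under stated assumptions: a single netns prefix and a simple poll loop stand in for the real helper bodies.

# Launch the target inside the test namespace with the traced core mask.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# Poll for the RPC UNIX socket before issuing any rpc_cmd calls.
for ((i = 0; i < 100; i++)); do
    [[ -S /var/tmp/spdk.sock ]] && break        # RPC server is listening
    kill -0 "$nvmfpid" 2>/dev/null || exit 1    # target died before listening
    sleep 0.1
done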
00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:57.079 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:57.338 [2024-12-10 14:24:57.829110] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:22:57.338 [2024-12-10 14:24:57.829152] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.338 [2024-12-10 14:24:57.914008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:57.338 [2024-12-10 14:24:57.954664] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.338 [2024-12-10 14:24:57.954702] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.338 [2024-12-10 14:24:57.954709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.338 [2024-12-10 14:24:57.954715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.338 [2024-12-10 14:24:57.954721] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:57.338 [2024-12-10 14:24:57.956089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.338 [2024-12-10 14:24:57.956194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:57.338 [2024-12-10 14:24:57.956304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.338 [2024-12-10 14:24:57.956305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.271 [2024-12-10 14:24:58.702268] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:58.271 14:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.271 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.271 Malloc1 
00:22:58.271 [2024-12-10 14:24:58.818394] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.271 Malloc2 00:22:58.271 Malloc3 00:22:58.271 Malloc4 00:22:58.271 Malloc5 00:22:58.271 Malloc6 00:22:58.529 Malloc7 00:22:58.529 Malloc8 00:22:58.529 Malloc9 00:22:58.529 Malloc10 00:22:58.529 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.529 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:58.529 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:58.529 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.529 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1709251 00:22:58.529 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:58.529 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:58.787 [2024-12-10 14:24:59.328440] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:04.058 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:04.058 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1708979 00:23:04.058 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1708979 ']' 00:23:04.058 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1708979 00:23:04.058 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:23:04.058 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:04.058 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1708979 00:23:04.058 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:04.058 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:04.058 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1708979' 00:23:04.059 killing process with pid 1708979 00:23:04.059 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1708979 00:23:04.059 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1708979 00:23:04.059 [2024-12-10 14:25:04.325742] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b5070 is same with the state(6) to be set 00:23:04.059 [2024-12-10 14:25:04.325793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b5070 is same with the state(6) to be set 00:23:04.059 [2024-12-10 14:25:04.325807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b5070 is same with the state(6) to be set 00:23:04.059 [2024-12-10 14:25:04.325814] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b5070 is same with the state(6) to be set 00:23:04.059 [2024-12-10 14:25:04.325820] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b5070 is same with the state(6) to be set 00:23:04.059 [2024-12-10 14:25:04.325827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b5070 is same with the state(6) to be set 00:23:04.059 [2024-12-10 14:25:04.325833] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b5070 is same with the state(6) to be set 00:23:04.059 [2024-12-10 14:25:04.325840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b5070 is same with the state(6) to be set 00:23:04.059 [2024-12-10 14:25:04.327022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b4200 is same with the state(6) to be set 00:23:04.059 [2024-12-10 14:25:04.327053] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b4200 is same with the state(6) to be set 00:23:04.059 [2024-12-10 14:25:04.327061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b4200 is same with the state(6) to be set 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write 
completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 [2024-12-10 14:25:04.327819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 [2024-12-10 14:25:04.328318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b3390 is same with the state(6) to be set 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 [2024-12-10 14:25:04.328348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b3390 is same with the state(6) to be set 00:23:04.059 [2024-12-10 14:25:04.328357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b3390 is same with the state(6) to be set 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 [2024-12-10 14:25:04.328365] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b3390 is same with the state(6) to be set 00:23:04.059 [2024-12-10 14:25:04.328372] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b3390 is same with the state(6) to be set 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 [2024-12-10 14:25:04.328378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b3390 is same with the state(6) to be set 00:23:04.059 starting I/O failed: -6 00:23:04.059 [2024-12-10 14:25:04.328384] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b3390 is same with the state(6) to be set 00:23:04.059 [2024-12-10 14:25:04.328392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b3390 is same with the state(6) to be set 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 [2024-12-10 14:25:04.328398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b3390 is same with the state(6) to be set 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 [2024-12-10 14:25:04.328685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b3860 is same with the state(6) to be set 00:23:04.059 [2024-12-10 14:25:04.328712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b3860 is same with the state(6) to be set 00:23:04.059 [2024-12-10 14:25:04.328714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:04.059 [2024-12-10 14:25:04.328721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b3860 is same with the state(6) to be set 00:23:04.059 [2024-12-10 14:25:04.328729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b3860 is same with the state(6) to be set 00:23:04.059 [2024-12-10 14:25:04.328736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b3860 is same with the state(6) to be set 00:23:04.059 [2024-12-10 14:25:04.328742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b3860 is same with the state(6) to be set 00:23:04.059 [2024-12-10 14:25:04.328750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b3860 is same with the state(6) to be set 00:23:04.059 [2024-12-10 14:25:04.328761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b3860 is same with the state(6) to be set 00:23:04.059 starting I/O failed: -6 00:23:04.059 [2024-12-10 14:25:04.328767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b3860 is same with the state(6) to be set 00:23:04.059 Write completed 
with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.059 starting I/O failed: -6 00:23:04.059 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error 
(sct=0, sc=8) 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 [2024-12-10 14:25:04.329773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 
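The flood of `Write completed with error (sct=0, sc=8)` and `starting I/O failed: -6` records above and below is the expected tc4 signature: once the target is killed under load, in-flight writes complete as aborted (generic status 0x8 reads as Command Aborted due to SQ Deletion in the NVMe spec) and further submissions fail with -6, i.e. ENXIO, as the CQ transport error notices show. A condensed sketch of the choreography driving it, with the kill step simplified from the traced killprocess helper and $rootdir assumed to point at the spdk checkout:

# Perf flags copied verbatim from the shutdown.sh@148 trace.
"$rootdir/build/bin/spdk_nvme_perf" -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
    -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
perfpid=$!
sleep 5              # shutdown.sh@150: let I/O ramp up on all 10 subsystems
kill "$nvmfpid"      # take the target away mid-run (killprocess, simplified)
wait "$perfpid"      # perf drains with aborted writes and CQ transport errors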
00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 [2024-12-10 14:25:04.331192] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca6e0 is same with the state(6) to be set 00:23:04.060 [2024-12-10 14:25:04.331208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca6e0 is same with the state(6) to be set 00:23:04.060 [2024-12-10 14:25:04.331215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca6e0 is same with the state(6) to be set 00:23:04.060 [2024-12-10 14:25:04.331227] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca6e0 is same with the state(6) to be set 00:23:04.060 [2024-12-10 14:25:04.331233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca6e0 is same with the state(6) to be set 00:23:04.060 [2024-12-10 14:25:04.331239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca6e0 is same with the state(6) to be set 00:23:04.060 [2024-12-10 14:25:04.331245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca6e0 is same with the state(6) to be set 00:23:04.060 Write completed with error 
(sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.060 [2024-12-10 14:25:04.331304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:04.060 NVMe io qpair process completion error 00:23:04.060 [2024-12-10 14:25:04.331549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18caa60 is same with the state(6) to be set 00:23:04.060 [2024-12-10 14:25:04.331565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18caa60 is same with the state(6) to be set 00:23:04.060 [2024-12-10 14:25:04.331572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18caa60 is same with the state(6) to be set 00:23:04.060 [2024-12-10 14:25:04.331579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18caa60 is same with the state(6) to be set 00:23:04.060 [2024-12-10 14:25:04.331586] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18caa60 is same with the state(6) to be set 00:23:04.060 [2024-12-10 14:25:04.331592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18caa60 is same with the state(6) to be set 00:23:04.060 [2024-12-10 14:25:04.331598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18caa60 is same with the state(6) to be set 00:23:04.060 [2024-12-10 14:25:04.331605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18caa60 is same with the state(6) to be set 00:23:04.060 [2024-12-10 14:25:04.331612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18caa60 is same with the state(6) to be set 00:23:04.060 [2024-12-10 14:25:04.331618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18caa60 is same with the state(6) to be set 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 Write completed with error (sct=0, sc=8) 00:23:04.060 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 [2024-12-10 
14:25:04.332007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cade0 is same with the state(6) to be set 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 [2024-12-10 14:25:04.332027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cade0 is same with the state(6) to be set 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 [2024-12-10 14:25:04.332035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cade0 is same with the state(6) to be set 00:23:04.061 starting I/O failed: -6 00:23:04.061 [2024-12-10 14:25:04.332042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cade0 is same with the state(6) to be set 00:23:04.061 [2024-12-10 14:25:04.332049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cade0 is same with the state(6) to be set 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 [2024-12-10 14:25:04.332055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cade0 is same with the state(6) to be set 00:23:04.061 [2024-12-10 14:25:04.332065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cade0 is same with the state(6) to be set 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 [2024-12-10 14:25:04.332073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cade0 is same with the state(6) to be set 00:23:04.061 [2024-12-10 14:25:04.332080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cade0 is same with the state(6) to be set 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 [2024-12-10 14:25:04.332255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:04.061 [2024-12-10 14:25:04.332360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338c0 is same with Write completed with error (sct=0, sc=8) 00:23:04.061 the state(6) to be set 00:23:04.061 [2024-12-10 14:25:04.332383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338c0 is same with Write completed with error (sct=0, sc=8) 00:23:04.061 the state(6) to be set 00:23:04.061 starting I/O failed: -6 00:23:04.061 [2024-12-10 14:25:04.332393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338c0 is same with the state(6) to be set 00:23:04.061 [2024-12-10 14:25:04.332400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338c0 is same with the state(6) to be set 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 [2024-12-10 14:25:04.332407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338c0 is same with 
the state(6) to be set 00:23:04.061 [2024-12-10 14:25:04.332413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338c0 is same with the state(6) to be set 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 [2024-12-10 14:25:04.333070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error 
(sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.061 starting I/O failed: -6 00:23:04.061 Write completed with error (sct=0, sc=8) 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: 
-6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 [2024-12-10 14:25:04.334151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write 
completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 [2024-12-10 14:25:04.335985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:04.062 NVMe io qpair process completion error 00:23:04.062 [2024-12-10 14:25:04.336901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656c80 is same with the state(6) to be set 00:23:04.062 [2024-12-10 14:25:04.336923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656c80 is same with the state(6) to be set 00:23:04.062 [2024-12-10 14:25:04.336930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656c80 is same with the state(6) to be set 00:23:04.062 [2024-12-10 14:25:04.336938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656c80 is same with the state(6) to be set 00:23:04.062 [2024-12-10 14:25:04.336948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656c80 is same with the state(6) to be set 00:23:04.062 [2024-12-10 14:25:04.336954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656c80 is same with the state(6) to be set 00:23:04.062 [2024-12-10 14:25:04.336960] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656c80 is same with the state(6) to be set 00:23:04.062 [2024-12-10 14:25:04.336967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656c80 is same with the state(6) to be set 00:23:04.062 [2024-12-10 14:25:04.336973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656c80 is same with the state(6) to be set 00:23:04.062 [2024-12-10 14:25:04.336979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656c80 is same with the state(6) to be set 00:23:04.062 [2024-12-10 14:25:04.336985] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656c80 is same with the state(6) to be set 00:23:04.062 [2024-12-10 14:25:04.336992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656c80 is same with the state(6) to be set 00:23:04.062 [2024-12-10 14:25:04.337054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657170 is same with the state(6) to be set 00:23:04.062 [2024-12-10 14:25:04.337074] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657170 is same with the state(6) to be set 00:23:04.062 [2024-12-10 14:25:04.337080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657170 is same with the state(6) to be set 00:23:04.062 [2024-12-10 14:25:04.337087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657170 is same with the state(6) to be set 00:23:04.062 [2024-12-10 14:25:04.337094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657170 is same with the state(6) to be set 00:23:04.062 [2024-12-10 14:25:04.337100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657170 is same with the state(6) to be set 00:23:04.062 [2024-12-10 14:25:04.337110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657170 is same with the state(6) to be set 00:23:04.062 [2024-12-10 14:25:04.337599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16567b0 is same with the state(6) to be set 00:23:04.062 [2024-12-10 14:25:04.337622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16567b0 is same with the state(6) to be set 00:23:04.062 [2024-12-10 14:25:04.337630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16567b0 is same with the state(6) to be set 00:23:04.062 [2024-12-10 14:25:04.337637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16567b0 is same with the state(6) to be set 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 
Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 Write completed with error (sct=0, sc=8) 00:23:04.062 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 [2024-12-10 14:25:04.338893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:04.063 Write completed 
with error (sct=0, sc=8) 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with 
error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.063 starting I/O failed: -6 00:23:04.063 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error 
(sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 [2024-12-10 14:25:04.341751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:04.064 NVMe io qpair process completion error 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 
Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 [2024-12-10 14:25:04.342752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 
00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 [2024-12-10 14:25:04.343529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 
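[Editor's note] The repeated "Write completed with error (sct=0, sc=8)" lines above are the initiator's I/O completion callbacks reporting aborted writes: status code type 0 is the NVMe generic command status set, and status code 8 decodes, per SPDK's include/spdk/nvme_spec.h, to SPDK_NVME_SC_ABORTED_SQ_DELETION, i.e. each write was aborted because its submission queue went away when the TCP qpair dropped. Below is a minimal sketch of such a callback, assuming an SPDK initiator; the function name write_complete_cb is hypothetical and not part of this test.

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Hypothetical completion callback. For the failures logged above,
     * cpl->status.sct == 0 (generic) and cpl->status.sc == 8, which
     * spdk/nvme_spec.h names SPDK_NVME_SC_ABORTED_SQ_DELETION. */
    static void
    write_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        (void)ctx;
        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "Write completed with error (sct=%d, sc=%d)\n",
                    cpl->status.sct, cpl->status.sc);
            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                /* The command never executed on the namespace, so it is
                 * safe to requeue it on another qpair after reconnect. */
            }
        }
    }
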
00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.064 Write completed with error (sct=0, sc=8) 00:23:04.064 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 [2024-12-10 14:25:04.344586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O 
failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O 
failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 [2024-12-10 14:25:04.346207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:04.065 NVMe io qpair process completion error 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 [2024-12-10 14:25:04.347203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, 
sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 Write completed with error (sct=0, sc=8) 00:23:04.065 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 [2024-12-10 14:25:04.348063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O 
failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 [2024-12-10 
14:25:04.349396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 
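[Editor's note] The "CQ transport error -6 (No such device or address)" lines come from spdk_nvme_qpair_process_completions() in nvme_qpair.c: once the TCP connection to the target is torn down, polling the qpair returns -ENXIO (-6) and every queued command is failed back through its callback, which is why each of these errors is surrounded by bursts of aborted writes. The interleaved "starting I/O failed: -6" lines are presumably the test tool's submission path hitting the same errno on the now-failed qpair. A rough sketch of that poll/submit pattern follows, assuming an SPDK initiator; poll_and_resubmit and io_done are hypothetical names.

    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    io_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        (void)ctx; (void)cpl; /* see the completion-callback sketch above */
    }

    /* Hypothetical poll/submit step mirroring the log's failure sequence. */
    static void
    poll_and_resubmit(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                      void *buf, uint64_t lba, uint32_t lba_count)
    {
        /* max_completions = 0 processes all available completions. On
         * transport loss this returns -ENXIO and the driver logs the
         * "CQ transport error" line seen above. */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);
        if (rc < 0) {
            return; /* qpair is dead; reconnect before submitting more I/O */
        }
        int src = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
                                         io_done, NULL, 0);
        if (src != 0) {
            printf("starting I/O failed: %d\n", src); /* as in this log */
        }
    }
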
00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.066 starting I/O failed: -6 00:23:04.066 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 [2024-12-10 14:25:04.353897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:04.067 NVMe io qpair process completion error 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with 
error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 [2024-12-10 14:25:04.354924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 
00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 [2024-12-10 14:25:04.355826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, 
sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 starting I/O failed: -6 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.067 [2024-12-10 14:25:04.356818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:04.067 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 
starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 
starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 [2024-12-10 14:25:04.360841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:04.068 NVMe io qpair process completion error 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 [2024-12-10 14:25:04.361800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with 
error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.068 starting I/O failed: -6 00:23:04.068 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 [2024-12-10 14:25:04.362711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 
Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 [2024-12-10 14:25:04.363702] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error 
(sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.069 Write completed with error (sct=0, sc=8) 00:23:04.069 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 [2024-12-10 14:25:04.365638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:04.070 NVMe io qpair process completion error 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with 
error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 [2024-12-10 14:25:04.366629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 
00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 [2024-12-10 14:25:04.367421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 
00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 Write completed with error (sct=0, sc=8) 00:23:04.070 starting I/O failed: -6 00:23:04.070 [2024-12-10 14:25:04.368468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:04.070 starting I/O failed: -6 00:23:04.070 starting I/O failed: -6 00:23:04.070 starting I/O failed: -6 00:23:04.070 starting I/O failed: -6 00:23:04.070 starting I/O failed: -6 00:23:04.070 starting I/O failed: -6 00:23:04.071 starting I/O failed: -6 00:23:04.071 starting I/O failed: -6 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 
starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 [2024-12-10 14:25:04.373081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:04.071 NVMe io qpair process completion error 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write 
completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O failed: -6 00:23:04.071 Write completed with error (sct=0, sc=8) 00:23:04.071 starting I/O 
failed: -6
00:23:04.071 Write completed with error (sct=0, sc=8)
00:23:04.071 starting I/O failed: -6
[... the two messages above repeat verbatim for every write queued on the failing qpairs; several hundred identical entries collapsed ...]
00:23:04.072 [2024-12-10 14:25:04.380156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:04.072 NVMe io qpair process completion error
[... further runs of "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" collapsed around each of the following errors ...]
00:23:04.073 [2024-12-10 14:25:04.381176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:04.073 [2024-12-10 14:25:04.382068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:04.073 [2024-12-10 14:25:04.383053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:04.074 [2024-12-10 14:25:04.384829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:04.074 NVMe io qpair process completion error
00:23:04.074 Initializing NVMe Controllers
00:23:04.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:04.074 Controller IO queue size 128, less than required.
00:23:04.074 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:04.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:04.074 Controller IO queue size 128, less than required.
00:23:04.074 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:04.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:04.074 Controller IO queue size 128, less than required.
00:23:04.074 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:04.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:04.074 Controller IO queue size 128, less than required.
00:23:04.074 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:04.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:04.074 Controller IO queue size 128, less than required.
00:23:04.074 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:04.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:04.074 Controller IO queue size 128, less than required.
00:23:04.074 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:04.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:04.074 Controller IO queue size 128, less than required.
00:23:04.074 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:04.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:04.074 Controller IO queue size 128, less than required.
00:23:04.074 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:04.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:04.074 Controller IO queue size 128, less than required.
00:23:04.074 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:04.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:04.074 Controller IO queue size 128, less than required.
00:23:04.074 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:04.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:04.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:04.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:04.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:04.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:04.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:04.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:04.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:04.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:04.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:04.074 Initialization complete. Launching workers.
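Two notes on the output above. First, the -6 that dominates the failed run is a negative errno: SPDK reports the dropped TCP connection as CQ transport error -6, i.e. -ENXIO ("No such device or address"), which is what an initiator should see when shutdown_tc4 tears the target down with writes still in flight. A quick way to confirm the errno mapping on the build host (the header path below is the usual Linux location, but may differ per distro):

    # ENXIO is errno 6; the CQ transport error above is its negated value
    grep -w ENXIO /usr/include/asm-generic/errno-base.h
    # -> #define ENXIO 6 /* No such device or address */

Second, the repeated "Controller IO queue size 128, less than required" notice is spdk_nvme_perf pointing out that its requested queue depth exceeds the 128 entries the fabrics controller advertises, so surplus requests wait inside the NVMe driver. A minimal sketch of rerunning one subsystem with a depth the controller can absorb; the IO size, workload and runtime below are illustrative, not taken from this run:

    # cap -q at the controller's IO queue size so nothing queues in the driver
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode4' \
        -q 128 -o 4096 -w randwrite -t 10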
00:23:04.074 ========================================================
00:23:04.074 Latency(us)
00:23:04.074 Device Information : IOPS MiB/s Average min max
00:23:04.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2198.25 94.46 58233.22 732.62 123761.35
00:23:04.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2176.60 93.53 58110.45 952.78 105020.62
00:23:04.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2206.65 94.82 57328.57 851.61 102942.55
00:23:04.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2170.29 93.25 58318.01 509.34 101385.52
00:23:04.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2217.58 95.29 57085.95 680.28 100059.03
00:23:04.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2174.29 93.43 58236.38 887.76 99685.32
00:23:04.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2176.60 93.53 58223.32 719.57 106765.48
00:23:04.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2179.54 93.65 58183.71 913.05 97802.33
00:23:04.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2234.81 96.03 56759.57 728.50 96396.10
00:23:04.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2208.12 94.88 57487.70 664.38 118833.60
00:23:04.074 ========================================================
00:23:04.074 Total : 21942.74 942.85 57791.81 509.34 123761.35
00:23:04.074
00:23:04.074 [2024-12-10 14:25:04.387792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2045890 is same with the state(6) to be set
00:23:04.074 [2024-12-10 14:25:04.387837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2047720 is same with the state(6) to be set
00:23:04.074 [2024-12-10 14:25:04.387867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2045560 is same with the state(6) to be set
00:23:04.074 [2024-12-10 14:25:04.387895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2046740 is same with the state(6) to be set
00:23:04.074 [2024-12-10 14:25:04.387922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2046a70 is same with the state(6) to be set
00:23:04.074 [2024-12-10 14:25:04.387952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2045ef0 is same with the state(6) to be set
00:23:04.074 [2024-12-10 14:25:04.387978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2046410 is same with the state(6) to be set
00:23:04.074 [2024-12-10 14:25:04.388006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2045bc0 is same with the state(6) to be set
00:23:04.074 [2024-12-10 14:25:04.388033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2047ae0 is same with the state(6) to be set
00:23:04.074 [2024-12-10 14:25:04.388061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2047900 is same with the state(6) to be set
00:23:04.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:04.074 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:23:05.010 14:25:05
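In the table above, Average/min/max are microseconds, and the per-controller throughput is consistent with Little's law for a queue kept full: IOPS ≈ queue depth / average latency. Checking the cnode4 row (assuming the effective queue depth per controller really was the 128 the controllers advertise):

    # 128 in-flight requests / 58233.22 us average latency
    echo 'scale=2; 128 / 0.05823322' | bc
    # -> 2198.06, matching the reported 2198.25 IOPS for cnode4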
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1709251 00:23:05.010 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:23:05.010 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1709251 00:23:05.010 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:05.010 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:05.010 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:23:05.010 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:05.010 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1709251 00:23:05.010 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:23:05.010 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:05.010 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:05.010 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:05.010 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:23:05.010 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:05.010 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:05.010 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:05.010 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:05.010 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:05.010 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:23:05.010 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:05.010 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:23:05.010 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:05.010 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:05.010 rmmod nvme_tcp 00:23:05.010 rmmod nvme_fabrics 00:23:05.269 rmmod nvme_keyring 00:23:05.270 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:05.270 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:23:05.270 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
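The module unload above runs under set +e with a bounded retry, since modprobe -r legitimately fails while a module is still referenced. A sketch of the traced pattern; the break-on-success and back-off are assumptions, as the trace only shows the {1..20} loop bounds, the two modprobe -v -r calls, and the set +e/set -e bracketing:

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp
        modprobe -v -r nvme-fabrics && break   # assumed: stop once the unload sticks
        sleep 0.5                              # hypothetical back-off between tries
    done
    set -e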
nvmf/common.sh@129 -- # return 0 00:23:05.270 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1708979 ']' 00:23:05.270 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1708979 00:23:05.270 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1708979 ']' 00:23:05.270 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1708979 00:23:05.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1708979) - No such process 00:23:05.270 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1708979 is not found' 00:23:05.270 Process with pid 1708979 is not found 00:23:05.270 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:05.270 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:05.270 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:05.270 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:05.270 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:23:05.270 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:05.270 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:23:05.270 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:05.270 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:05.270 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.270 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.270 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.172 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:07.172 00:23:07.172 real 0m10.489s 00:23:07.172 user 0m27.533s 00:23:07.172 sys 0m5.209s 00:23:07.172 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:07.172 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:07.172 ************************************ 00:23:07.172 END TEST nvmf_shutdown_tc4 00:23:07.172 ************************************ 00:23:07.172 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:07.172 00:23:07.172 real 0m43.059s 00:23:07.172 user 1m45.796s 00:23:07.172 sys 0m14.911s 00:23:07.172 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:07.172 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
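killprocess above probes pid 1708979 with kill -0, which delivers no signal and only asks whether the process exists and is signalable; the nvmf target has already exited, so bash reports "No such process" and the helper just logs that and moves on. The underlying pattern, as a minimal standalone sketch:

    pid=1708979                      # the nvmf target pid from this run
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"                  # still alive: ask it to terminate
    else
        echo "Process with pid $pid is not found"
    fi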
common/autotest_common.sh@10 -- # set +x 00:23:07.172 ************************************ 00:23:07.172 END TEST nvmf_shutdown 00:23:07.172 ************************************ 00:23:07.431 14:25:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:07.431 14:25:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:07.431 14:25:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:07.431 14:25:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:07.431 ************************************ 00:23:07.431 START TEST nvmf_nsid 00:23:07.431 ************************************ 00:23:07.431 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:07.431 * Looking for test storage... 00:23:07.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:07.431 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:07.431 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:23:07.431 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:07.431 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:07.431 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:07.431 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:07.431 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:07.431 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:07.431 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:07.431 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:07.431 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:07.431 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:07.431 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:07.431 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:07.431 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:07.431 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:07.431 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:07.431 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:07.431 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:07.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.432 --rc genhtml_branch_coverage=1 00:23:07.432 --rc genhtml_function_coverage=1 00:23:07.432 --rc genhtml_legend=1 00:23:07.432 --rc geninfo_all_blocks=1 00:23:07.432 --rc geninfo_unexecuted_blocks=1 00:23:07.432 00:23:07.432 ' 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:07.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.432 --rc genhtml_branch_coverage=1 00:23:07.432 --rc genhtml_function_coverage=1 00:23:07.432 --rc genhtml_legend=1 00:23:07.432 --rc geninfo_all_blocks=1 00:23:07.432 --rc geninfo_unexecuted_blocks=1 00:23:07.432 00:23:07.432 ' 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:07.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.432 --rc genhtml_branch_coverage=1 00:23:07.432 --rc genhtml_function_coverage=1 00:23:07.432 --rc genhtml_legend=1 00:23:07.432 --rc geninfo_all_blocks=1 00:23:07.432 --rc geninfo_unexecuted_blocks=1 00:23:07.432 00:23:07.432 ' 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:07.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.432 --rc genhtml_branch_coverage=1 00:23:07.432 --rc genhtml_function_coverage=1 00:23:07.432 --rc genhtml_legend=1 00:23:07.432 --rc geninfo_all_blocks=1 00:23:07.432 --rc geninfo_unexecuted_blocks=1 00:23:07.432 00:23:07.432 ' 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
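The lt/cmp_versions trace above implements a field-wise version comparison: split both strings on ., - and :, then compare numerically field by field, so "1.15" < "2" because 1 < 2 in the first field. A condensed, self-contained sketch of the same idea (scripts/common.sh additionally validates each field through its decimal helper):

    version_lt() {                       # returns 0 if $1 < $2
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                                # equal versions are not less-than
    }
    version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'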
== FreeBSD ]] 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.432 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:07.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:07.691 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:07.692 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:07.692 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:07.692 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:07.692 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:07.692 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
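The harmless "[: : integer expression expected" above comes from common.sh handing test's -eq an empty expansion: [ '' -eq 1 ] is a type error in bash, not a false result. A sketch of the failure and the usual guard; the variable name is hypothetical, standing in for whatever unset flag common.sh line 33 checks:

    unset MY_FLAG
    [ "$MY_FLAG" -eq 1 ]                           # -> [: : integer expression expected
    [ "${MY_FLAG:-0}" -eq 1 ] || echo 'flag off'   # defaulting makes it a clean test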
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:07.692 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:07.692 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:07.692 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:07.692 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:07.692 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.692 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:07.692 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:07.692 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:07.692 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.692 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.692 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.692 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:07.692 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:07.692 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:07.692 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:14.261 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:14.261 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:14.261 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:14.262 Found net devices under 0000:af:00.0: cvl_0_0 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:14.262 Found net devices under 0000:af:00.1: cvl_0_1 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:14.262 14:25:14 
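The device discovery above is plain sysfs walking: a NIC's PCI function exposes its vendor/device IDs and its network interfaces under /sys/bus/pci/devices, which is how 0000:af:00.0 (0x8086/0x159b, an Intel E810 port driven by ice) resolves to cvl_0_0. The same lookups by hand:

    pci=0000:af:00.0
    cat /sys/bus/pci/devices/$pci/vendor          # -> 0x8086
    cat /sys/bus/pci/devices/$pci/device          # -> 0x159b
    ls /sys/bus/pci/devices/$pci/net/             # -> cvl_0_0
    basename "$(readlink /sys/bus/pci/devices/$pci/driver)"   # -> ice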
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:14.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:14.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:23:14.262 00:23:14.262 --- 10.0.0.2 ping statistics --- 00:23:14.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.262 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:14.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:14.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:23:14.262 00:23:14.262 --- 10.0.0.1 ping statistics --- 00:23:14.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.262 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1714184 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1714184 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1714184 ']' 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.262 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:14.262 [2024-12-10 14:25:14.962771] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:23:14.262 [2024-12-10 14:25:14.962816] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.522 [2024-12-10 14:25:15.046592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.523 [2024-12-10 14:25:15.085830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.523 [2024-12-10 14:25:15.085862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.523 [2024-12-10 14:25:15.085869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.523 [2024-12-10 14:25:15.085876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.523 [2024-12-10 14:25:15.085882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.523 [2024-12-10 14:25:15.086442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.089 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.089 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:15.089 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:15.089 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.089 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:15.089 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.089 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1714217 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=beced54b-2ef4-4f0e-8184-cde13bf291a4 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=fac3c870-20ff-4c1e-b24e-36fa0c974ea3 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=a5c08b47-265c-4419-9059-b569047bd3a4 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.348 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:15.348 null0 00:23:15.348 null1 00:23:15.348 [2024-12-10 14:25:15.883072] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:23:15.348 [2024-12-10 14:25:15.883114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1714217 ] 00:23:15.348 null2 00:23:15.349 [2024-12-10 14:25:15.888673] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.349 [2024-12-10 14:25:15.912848] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.349 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.349 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1714217 /var/tmp/tgt2.sock 00:23:15.349 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1714217 ']' 00:23:15.349 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:15.349 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.349 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:15.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
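The three uuidgen values just recorded become the namespaces' NGUIDs: an NGUID is simply the UUID's sixteen bytes with the dashes removed, which is why nvmf/common.sh@787 below is just a tr -d -. The check the test performs later (nsid.sh@42-43) reduces to roughly this sketch, assuming the helper uppercases to match the comparison seen in the trace:

    # uuid2nguid: strip dashes (and upcase) to get the NGUID form of a UUID.
    uuid2nguid() { tr -d '-' <<< "${1^^}"; }
    # Read the NGUID the target actually reports for namespace 1 and compare.
    nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    [[ ${nguid^^} == "$(uuid2nguid "$ns1uuid")" ]] && echo "nsid 1 NGUID matches"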
00:23:15.349 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.349 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:15.349 [2024-12-10 14:25:15.964574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.349 [2024-12-10 14:25:16.003743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.688 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.688 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:15.688 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:15.999 [2024-12-10 14:25:16.539245] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.999 [2024-12-10 14:25:16.555355] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:15.999 nvme0n1 nvme0n2 00:23:15.999 nvme1n1 00:23:15.999 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:15.999 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:15.999 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 00:23:16.954 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:16.954 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:16.954 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:23:16.954 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:16.954 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:23:16.954 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:16.954 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:16.954 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:16.954 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:16.954 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:16.954 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:23:16.954 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:23:16.954 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:18.329 14:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid beced54b-2ef4-4f0e-8184-cde13bf291a4 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=beced54b2ef44f0e8184cde13bf291a4 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo BECED54B2EF44F0E8184CDE13BF291A4 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ BECED54B2EF44F0E8184CDE13BF291A4 == \B\E\C\E\D\5\4\B\2\E\F\4\4\F\0\E\8\1\8\4\C\D\E\1\3\B\F\2\9\1\A\4 ]] 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid fac3c870-20ff-4c1e-b24e-36fa0c974ea3 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=fac3c87020ff4c1eb24e36fa0c974ea3 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FAC3C87020FF4C1EB24E36FA0C974EA3 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ FAC3C87020FF4C1EB24E36FA0C974EA3 == \F\A\C\3\C\8\7\0\2\0\F\F\4\C\1\E\B\2\4\E\3\6\F\A\0\C\9\7\4\E\A\3 ]] 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:23:18.329 14:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid a5c08b47-265c-4419-9059-b569047bd3a4 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:18.329 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:23:18.330 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:18.330 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a5c08b47265c44199059b569047bd3a4 00:23:18.330 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A5C08B47265C44199059B569047BD3A4 00:23:18.330 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ A5C08B47265C44199059B569047BD3A4 == \A\5\C\0\8\B\4\7\2\6\5\C\4\4\1\9\9\0\5\9\B\5\6\9\0\4\7\B\D\3\A\4 ]] 00:23:18.330 14:25:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:18.589 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:18.589 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:18.589 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1714217 00:23:18.589 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1714217 ']' 00:23:18.589 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1714217 00:23:18.589 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:18.589 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.589 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1714217 00:23:18.589 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:18.589 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:18.589 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1714217' 00:23:18.589 killing process with pid 1714217 00:23:18.589 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1714217 00:23:18.589 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1714217 00:23:18.847 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:18.847 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:18.847 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:18.847 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:18.847 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:23:18.847 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:18.847 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:18.847 rmmod nvme_tcp 00:23:18.847 rmmod nvme_fabrics 00:23:18.847 rmmod nvme_keyring 00:23:18.847 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:18.847 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:18.847 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:18.847 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1714184 ']' 00:23:18.847 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1714184 00:23:18.847 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1714184 ']' 00:23:18.847 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1714184 00:23:18.847 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:18.847 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.847 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1714184 00:23:19.106 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:19.106 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:19.106 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1714184' 00:23:19.106 killing process with pid 1714184 00:23:19.106 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1714184 00:23:19.106 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1714184 00:23:19.106 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:19.106 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:19.106 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:19.106 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:19.106 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:19.106 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:19.106 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:19.106 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:19.106 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:19.106 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.106 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.106 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.640 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:21.640 00:23:21.640 real 0m13.852s 00:23:21.640 user 
0m10.761s 00:23:21.640 sys 0m6.118s 00:23:21.640 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:21.640 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:21.640 ************************************ 00:23:21.640 END TEST nvmf_nsid 00:23:21.640 ************************************ 00:23:21.640 14:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:21.640 00:23:21.640 real 12m26.402s 00:23:21.640 user 26m4.724s 00:23:21.640 sys 3m58.521s 00:23:21.640 14:25:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:21.640 14:25:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:21.640 ************************************ 00:23:21.640 END TEST nvmf_target_extra 00:23:21.640 ************************************ 00:23:21.640 14:25:21 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:21.640 14:25:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:21.640 14:25:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:21.640 14:25:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:21.640 ************************************ 00:23:21.640 START TEST nvmf_host 00:23:21.640 ************************************ 00:23:21.640 14:25:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:21.640 * Looking for test storage... 00:23:21.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:21.640 14:25:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:21.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.640 --rc genhtml_branch_coverage=1 00:23:21.640 --rc genhtml_function_coverage=1 00:23:21.640 --rc genhtml_legend=1 00:23:21.640 --rc geninfo_all_blocks=1 00:23:21.640 --rc geninfo_unexecuted_blocks=1 00:23:21.640 00:23:21.641 ' 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:21.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.641 --rc genhtml_branch_coverage=1 00:23:21.641 --rc genhtml_function_coverage=1 00:23:21.641 --rc genhtml_legend=1 00:23:21.641 --rc geninfo_all_blocks=1 00:23:21.641 --rc geninfo_unexecuted_blocks=1 00:23:21.641 00:23:21.641 ' 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:21.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.641 --rc genhtml_branch_coverage=1 00:23:21.641 --rc genhtml_function_coverage=1 00:23:21.641 --rc genhtml_legend=1 00:23:21.641 --rc geninfo_all_blocks=1 00:23:21.641 --rc geninfo_unexecuted_blocks=1 00:23:21.641 00:23:21.641 ' 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:21.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.641 --rc genhtml_branch_coverage=1 00:23:21.641 --rc genhtml_function_coverage=1 00:23:21.641 --rc genhtml_legend=1 00:23:21.641 --rc geninfo_all_blocks=1 00:23:21.641 --rc geninfo_unexecuted_blocks=1 00:23:21.641 00:23:21.641 ' 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
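The lcov gate above splits version strings on the separator characters and walks the components numerically (the lt / cmp_versions pair in scripts/common.sh). A condensed equivalent of that less-than test, as a sketch rather than the script's exact implementation:

    # lt A B: true when version A sorts before version B, component by component.
    lt() {
      local IFS=.- i a b
      read -ra a <<< "$1"; read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1  # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov older than 2: use the legacy --rc options"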
00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:21.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.641 ************************************ 00:23:21.641 START TEST nvmf_multicontroller 00:23:21.641 ************************************ 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:21.641 * Looking for test storage... 
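The "[: : integer expression expected" noise in the trace is bash rejecting a numeric test on an empty string ('[' '' -eq 1 ']' at nvmf/common.sh line 33, where the flag being tested is unset in this configuration). A guarded form avoids the error; SOME_FLAG is an illustrative stand-in for whichever variable the script tests:

    # Default unset/empty to 0 before the numeric comparison.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo "flag enabled"
    fi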
00:23:21.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:21.641 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:21.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.641 --rc genhtml_branch_coverage=1 00:23:21.641 --rc genhtml_function_coverage=1 00:23:21.641 --rc genhtml_legend=1 00:23:21.641 --rc geninfo_all_blocks=1 00:23:21.641 --rc geninfo_unexecuted_blocks=1 00:23:21.642 00:23:21.642 ' 00:23:21.642 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:21.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.642 --rc genhtml_branch_coverage=1 00:23:21.642 --rc genhtml_function_coverage=1 00:23:21.642 --rc genhtml_legend=1 00:23:21.642 --rc geninfo_all_blocks=1 00:23:21.642 --rc geninfo_unexecuted_blocks=1 00:23:21.642 00:23:21.642 ' 00:23:21.642 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:21.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.642 --rc genhtml_branch_coverage=1 00:23:21.642 --rc genhtml_function_coverage=1 00:23:21.642 --rc genhtml_legend=1 00:23:21.642 --rc geninfo_all_blocks=1 00:23:21.642 --rc geninfo_unexecuted_blocks=1 00:23:21.642 00:23:21.642 ' 00:23:21.642 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:21.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.642 --rc genhtml_branch_coverage=1 00:23:21.642 --rc genhtml_function_coverage=1 00:23:21.642 --rc genhtml_legend=1 00:23:21.642 --rc geninfo_all_blocks=1 00:23:21.642 --rc geninfo_unexecuted_blocks=1 00:23:21.642 00:23:21.642 ' 00:23:21.642 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:21.642 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:21.642 14:25:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.642 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.642 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.642 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.642 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.642 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.642 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.642 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.642 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.642 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.642 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:21.642 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:21.642 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.642 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.642 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:21.642 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.642 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:21.642 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:21.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:21.901 14:25:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:21.901 14:25:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:28.471 
14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:28.471 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:28.471 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.471 14:25:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:28.471 Found net devices under 0000:af:00.0: cvl_0_0 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.471 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:28.472 Found net devices under 0000:af:00.1: cvl_0_1 00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
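The trace above is the NIC discovery step: each supported PCI function (here the two Intel E810 ports, 0x8086:0x159b, bound to the ice driver) is mapped to its kernel net device by globbing sysfs, yielding cvl_0_0 and cvl_0_1. A minimal standalone sketch of that lookup, reusing the PCI addresses from this run (substitute your own on other hosts):

for pci in 0000:af:00.0 0000:af:00.1; do
    # The kernel exposes the bound net device under the PCI function's
    # sysfs node; an empty glob means no network driver claimed the port.
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
    done
done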
00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.472 14:25:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:28.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:23:28.472 00:23:28.472 --- 10.0.0.2 ping statistics --- 00:23:28.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.472 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:28.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:23:28.472 00:23:28.472 --- 10.0.0.1 ping statistics --- 00:23:28.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.472 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1718995 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1718995 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1718995 ']' 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.472 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.731 [2024-12-10 14:25:29.214373] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:23:28.731 [2024-12-10 14:25:29.214418] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.731 [2024-12-10 14:25:29.299169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:28.731 [2024-12-10 14:25:29.337564] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.731 [2024-12-10 14:25:29.337601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.731 [2024-12-10 14:25:29.337608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.731 [2024-12-10 14:25:29.337614] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.731 [2024-12-10 14:25:29.337618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:28.731 [2024-12-10 14:25:29.338908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.731 [2024-12-10 14:25:29.338996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.731 [2024-12-10 14:25:29.338998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:28.731 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.731 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:28.731 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:28.731 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:28.731 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.989 [2024-12-10 14:25:29.482379] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.989 Malloc0 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.989 [2024-12-10 14:25:29.543925] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.989 [2024-12-10 14:25:29.551838] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.989 Malloc1 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1719020 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1719020 /var/tmp/bdevperf.sock 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1719020 ']' 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
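bdevperf is launched with -z, so it idles on /var/tmp/bdevperf.sock until JSON-RPC configuration arrives, and the run itself is kicked off later via bdevperf.py perform_tests. Since rpc_cmd in these scripts is normally a thin wrapper around scripts/rpc.py, the attach that follows is roughly equivalent to this hand-run call (socket and arguments copied from the trace below):

# Attach the target's cnode1 subsystem as bdev NVMe0 over TCP on the
# first listener port, pinning the initiator source address to 10.0.0.1.
scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1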
00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.989 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.248 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.248 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:29.248 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:29.248 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.248 14:25:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.506 NVMe0n1 00:23:29.506 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.506 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:29.506 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:29.506 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.506 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.506 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.506 1 00:23:29.506 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:29.506 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:29.506 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:29.506 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:29.506 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:29.506 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:29.506 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:29.506 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:29.506 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.506 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.506 request: 00:23:29.506 { 00:23:29.506 "name": "NVMe0", 00:23:29.506 "trtype": "tcp", 00:23:29.506 "traddr": "10.0.0.2", 00:23:29.506 "adrfam": "ipv4", 00:23:29.506 "trsvcid": "4420", 00:23:29.506 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:29.506 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:29.506 "hostaddr": "10.0.0.1", 00:23:29.506 "prchk_reftag": false, 00:23:29.506 "prchk_guard": false, 00:23:29.506 "hdgst": false, 00:23:29.506 "ddgst": false, 00:23:29.506 "allow_unrecognized_csi": false, 00:23:29.506 "method": "bdev_nvme_attach_controller", 00:23:29.506 "req_id": 1 00:23:29.506 } 00:23:29.506 Got JSON-RPC error response 00:23:29.506 response: 00:23:29.506 { 00:23:29.506 "code": -114, 00:23:29.506 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:29.506 } 00:23:29.506 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:29.506 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.507 request: 00:23:29.507 { 00:23:29.507 "name": "NVMe0", 00:23:29.507 "trtype": "tcp", 00:23:29.507 "traddr": "10.0.0.2", 00:23:29.507 "adrfam": "ipv4", 00:23:29.507 "trsvcid": "4420", 00:23:29.507 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:29.507 "hostaddr": "10.0.0.1", 00:23:29.507 "prchk_reftag": false, 00:23:29.507 "prchk_guard": false, 00:23:29.507 "hdgst": false, 00:23:29.507 "ddgst": false, 00:23:29.507 "allow_unrecognized_csi": false, 00:23:29.507 "method": "bdev_nvme_attach_controller", 00:23:29.507 "req_id": 1 00:23:29.507 } 00:23:29.507 Got JSON-RPC error response 00:23:29.507 response: 00:23:29.507 { 00:23:29.507 "code": -114, 00:23:29.507 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:29.507 } 00:23:29.507 14:25:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.507 request: 00:23:29.507 { 00:23:29.507 "name": "NVMe0", 00:23:29.507 "trtype": "tcp", 00:23:29.507 "traddr": "10.0.0.2", 00:23:29.507 "adrfam": "ipv4", 00:23:29.507 "trsvcid": "4420", 00:23:29.507 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.507 "hostaddr": "10.0.0.1", 00:23:29.507 "prchk_reftag": false, 00:23:29.507 "prchk_guard": false, 00:23:29.507 "hdgst": false, 00:23:29.507 "ddgst": false, 00:23:29.507 "multipath": "disable", 00:23:29.507 "allow_unrecognized_csi": false, 00:23:29.507 "method": "bdev_nvme_attach_controller", 00:23:29.507 "req_id": 1 00:23:29.507 } 00:23:29.507 Got JSON-RPC error response 00:23:29.507 response: 00:23:29.507 { 00:23:29.507 "code": -114, 00:23:29.507 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:29.507 } 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:29.507 14:25:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.507 request: 00:23:29.507 { 00:23:29.507 "name": "NVMe0", 00:23:29.507 "trtype": "tcp", 00:23:29.507 "traddr": "10.0.0.2", 00:23:29.507 "adrfam": "ipv4", 00:23:29.507 "trsvcid": "4420", 00:23:29.507 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.507 "hostaddr": "10.0.0.1", 00:23:29.507 "prchk_reftag": false, 00:23:29.507 "prchk_guard": false, 00:23:29.507 "hdgst": false, 00:23:29.507 "ddgst": false, 00:23:29.507 "multipath": "failover", 00:23:29.507 "allow_unrecognized_csi": false, 00:23:29.507 "method": "bdev_nvme_attach_controller", 00:23:29.507 "req_id": 1 00:23:29.507 } 00:23:29.507 Got JSON-RPC error response 00:23:29.507 response: 00:23:29.507 { 00:23:29.507 "code": -114, 00:23:29.507 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:29.507 } 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.507 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.765 NVMe0n1 00:23:29.765 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
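The rejected attaches above all fail with code -114, which matches -EALREADY (Linux errno 114): the name NVMe0 is already registered, so a repeat bdev_nvme_attach_controller is only accepted as an additional path to the same subsystem, and with '-x disable' no second path is accepted at all. That is why the host/multicontroller.sh@79 call succeeds where the earlier ones fail; condensed, the rule as this trace exercises it (addresses and NQNs from this run):

# @60: same path, different hostnqn (-q ...)     -> -114, path conflict
# @65: same path, different subsystem (cnode2)   -> -114, path conflict
# @69: -x disable (no extra path permitted)      -> -114, multipath disabled
# @74: -x failover but identical path (4420)     -> -114, path already exists
# @79: same name and NQN, new listener port 4421 -> accepted (NVMe0n1)
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1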
00:23:29.765 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:29.765 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.765 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.765 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.765 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:29.765 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.765 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.765 00:23:29.765 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.765 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:29.765 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:29.765 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.765 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.765 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.765 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:29.765 14:25:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:31.138 { 00:23:31.138 "results": [ 00:23:31.138 { 00:23:31.138 "job": "NVMe0n1", 00:23:31.138 "core_mask": "0x1", 00:23:31.138 "workload": "write", 00:23:31.138 "status": "finished", 00:23:31.138 "queue_depth": 128, 00:23:31.138 "io_size": 4096, 00:23:31.138 "runtime": 1.003947, 00:23:31.138 "iops": 25434.609595924885, 00:23:31.138 "mibps": 99.35394373408158, 00:23:31.138 "io_failed": 0, 00:23:31.138 "io_timeout": 0, 00:23:31.138 "avg_latency_us": 5026.413883763648, 00:23:31.138 "min_latency_us": 2980.327619047619, 00:23:31.138 "max_latency_us": 9986.438095238096 00:23:31.138 } 00:23:31.138 ], 00:23:31.138 "core_count": 1 00:23:31.138 } 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1719020 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 1719020 ']' 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1719020 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1719020 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1719020' 00:23:31.138 killing process with pid 1719020 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1719020 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1719020 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:31.138 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:31.139 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:31.139 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:31.139 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:31.139 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:31.139 [2024-12-10 14:25:29.652911] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:23:31.139 [2024-12-10 14:25:29.652960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719020 ] 00:23:31.139 [2024-12-10 14:25:29.733251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.139 [2024-12-10 14:25:29.773882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.139 [2024-12-10 14:25:30.346596] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name c92e489d-9d38-4779-9b8d-4910c86fc3dd already exists 00:23:31.139 [2024-12-10 14:25:30.346623] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:c92e489d-9d38-4779-9b8d-4910c86fc3dd alias for bdev NVMe1n1 00:23:31.139 [2024-12-10 14:25:30.346631] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:31.139 Running I/O for 1 seconds... 00:23:31.139 25407.00 IOPS, 99.25 MiB/s 00:23:31.139 Latency(us) 00:23:31.139 [2024-12-10T13:25:31.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.139 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:31.139 NVMe0n1 : 1.00 25434.61 99.35 0.00 0.00 5026.41 2980.33 9986.44 00:23:31.139 [2024-12-10T13:25:31.879Z] =================================================================================================================== 00:23:31.139 [2024-12-10T13:25:31.879Z] Total : 25434.61 99.35 0.00 0.00 5026.41 2980.33 9986.44 00:23:31.139 Received shutdown signal, test time was about 1.000000 seconds 00:23:31.139 00:23:31.139 Latency(us) 00:23:31.139 [2024-12-10T13:25:31.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.139 [2024-12-10T13:25:31.879Z] =================================================================================================================== 00:23:31.139 [2024-12-10T13:25:31.879Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:31.139 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:31.139 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:31.139 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:31.139 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:31.139 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:31.139 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:31.139 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:31.139 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:31.139 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:31.139 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:31.139 rmmod nvme_tcp 00:23:31.139 rmmod nvme_fabrics 00:23:31.139 rmmod nvme_keyring 00:23:31.139 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:31.139 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:31.139 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:31.139 
14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1718995 ']' 00:23:31.139 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1718995 00:23:31.139 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1718995 ']' 00:23:31.139 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1718995 00:23:31.139 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:31.139 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.139 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1718995 00:23:31.398 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:31.398 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:31.398 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1718995' 00:23:31.398 killing process with pid 1718995 00:23:31.398 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1718995 00:23:31.398 14:25:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1718995 00:23:31.398 14:25:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:31.398 14:25:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:31.398 14:25:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:31.398 14:25:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:31.398 14:25:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:31.398 14:25:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:31.398 14:25:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:31.398 14:25:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:31.398 14:25:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:31.398 14:25:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.398 14:25:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.398 14:25:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:33.932 00:23:33.932 real 0m11.967s 00:23:33.932 user 0m12.360s 00:23:33.932 sys 0m5.817s 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.932 ************************************ 00:23:33.932 END TEST nvmf_multicontroller 00:23:33.932 ************************************ 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.932 ************************************ 00:23:33.932 START TEST nvmf_aer 00:23:33.932 ************************************ 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:33.932 * Looking for test storage... 00:23:33.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:33.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.932 --rc genhtml_branch_coverage=1 00:23:33.932 --rc genhtml_function_coverage=1 00:23:33.932 --rc genhtml_legend=1 00:23:33.932 --rc geninfo_all_blocks=1 00:23:33.932 --rc geninfo_unexecuted_blocks=1 00:23:33.932 00:23:33.932 ' 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:33.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.932 --rc genhtml_branch_coverage=1 00:23:33.932 --rc genhtml_function_coverage=1 00:23:33.932 --rc genhtml_legend=1 00:23:33.932 --rc geninfo_all_blocks=1 00:23:33.932 --rc geninfo_unexecuted_blocks=1 00:23:33.932 00:23:33.932 ' 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:33.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.932 --rc genhtml_branch_coverage=1 00:23:33.932 --rc genhtml_function_coverage=1 00:23:33.932 --rc genhtml_legend=1 00:23:33.932 --rc geninfo_all_blocks=1 00:23:33.932 --rc geninfo_unexecuted_blocks=1 00:23:33.932 00:23:33.932 ' 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:33.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.932 --rc genhtml_branch_coverage=1 00:23:33.932 --rc genhtml_function_coverage=1 00:23:33.932 --rc genhtml_legend=1 00:23:33.932 --rc geninfo_all_blocks=1 00:23:33.932 --rc geninfo_unexecuted_blocks=1 00:23:33.932 00:23:33.932 ' 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.932 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:33.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:33.933 14:25:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:40.501 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:40.501 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:40.501 Found net devices under 0000:af:00.0: cvl_0_0 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:40.501 14:25:40 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:40.501 Found net devices under 0000:af:00.1: cvl_0_1 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:40.501 14:25:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:40.501 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:40.501 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:40.501 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:40.501 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:40.501 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:40.501 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:40.501 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:40.501 
14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:40.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:40.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:23:40.501 00:23:40.501 --- 10.0.0.2 ping statistics --- 00:23:40.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.501 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:23:40.501 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:40.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:40.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:23:40.501 00:23:40.502 --- 10.0.0.1 ping statistics --- 00:23:40.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.502 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:23:40.502 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.502 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:40.502 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:40.502 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.502 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:40.502 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:40.502 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.502 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:40.502 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:40.502 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:40.502 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:40.502 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.502 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:40.502 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1723280 00:23:40.502 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:40.502 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1723280 00:23:40.502 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1723280 ']' 00:23:40.502 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.502 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.502 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.502 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.502 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:40.760 [2024-12-10 14:25:41.277833] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
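Condensed from the nvmftestinit trace above, the network plumbing for this run boils down to the following minimal sketch (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing are specific to this e810 box and will differ on other hardware):

# Flush any stale addresses, then isolate the target-side port in its own netns.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# Point-to-point addressing: initiator in the root namespace, target in the netns.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port; the comment tag lets cleanup strip exactly this rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Verify reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1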
00:23:40.760 [2024-12-10 14:25:41.277875] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.760 [2024-12-10 14:25:41.345981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:40.760 [2024-12-10 14:25:41.387596] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.760 [2024-12-10 14:25:41.387632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.760 [2024-12-10 14:25:41.387639] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.760 [2024-12-10 14:25:41.387645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.760 [2024-12-10 14:25:41.387650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.760 [2024-12-10 14:25:41.392234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.760 [2024-12-10 14:25:41.392271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.760 [2024-12-10 14:25:41.392380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.760 [2024-12-10 14:25:41.392381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:40.760 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.760 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:40.760 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:40.760 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:40.760 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.018 [2024-12-10 14:25:41.529020] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.018 Malloc0 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.018 [2024-12-10 14:25:41.590759] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.018 [ 00:23:41.018 { 00:23:41.018 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:41.018 "subtype": "Discovery", 00:23:41.018 "listen_addresses": [], 00:23:41.018 "allow_any_host": true, 00:23:41.018 "hosts": [] 00:23:41.018 }, 00:23:41.018 { 00:23:41.018 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.018 "subtype": "NVMe", 00:23:41.018 "listen_addresses": [ 00:23:41.018 { 00:23:41.018 "trtype": "TCP", 00:23:41.018 "adrfam": "IPv4", 00:23:41.018 "traddr": "10.0.0.2", 00:23:41.018 "trsvcid": "4420" 00:23:41.018 } 00:23:41.018 ], 00:23:41.018 "allow_any_host": true, 00:23:41.018 "hosts": [], 00:23:41.018 "serial_number": "SPDK00000000000001", 00:23:41.018 "model_number": "SPDK bdev Controller", 00:23:41.018 "max_namespaces": 2, 00:23:41.018 "min_cntlid": 1, 00:23:41.018 "max_cntlid": 65519, 00:23:41.018 "namespaces": [ 00:23:41.018 { 00:23:41.018 "nsid": 1, 00:23:41.018 "bdev_name": "Malloc0", 00:23:41.018 "name": "Malloc0", 00:23:41.018 "nguid": "55B053402AA94937859C8749798DBD58", 00:23:41.018 "uuid": "55b05340-2aa9-4937-859c-8749798dbd58" 00:23:41.018 } 00:23:41.018 ] 00:23:41.018 } 00:23:41.018 ] 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:41.018 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1723337 00:23:41.019 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:41.019 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:41.019 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:41.019 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:41.019 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:41.019 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:41.019 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:41.019 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:41.019 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:41.019 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:41.019 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.277 Malloc1 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.277 Asynchronous Event Request test 00:23:41.277 Attaching to 10.0.0.2 00:23:41.277 Attached to 10.0.0.2 00:23:41.277 Registering asynchronous event callbacks... 00:23:41.277 Starting namespace attribute notice tests for all controllers... 00:23:41.277 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:41.277 aer_cb - Changed Namespace 00:23:41.277 Cleaning up... 
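Stripped of the xtrace noise, the aer.sh flow exercised above is approximately the following sketch (rpc.py stands in here for the test's rpc_cmd wrapper; the NQNs, sizes, and touch file are as logged):

# Target setup: one subsystem capped at two namespaces (-m 2), one 64 MiB
# malloc bdev with 512-byte blocks as namespace 1, one TCP listener.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 --name Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: the aer tool connects, registers AER callbacks, then touches
# /tmp/aer_touch_file so the script knows it is ready to receive notices.
test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &

# Hot-adding a second namespace is what fires the 'Changed Namespace' AEN
# recorded in the output above.
rpc.py bdev_malloc_create 64 4096 --name Malloc1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2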
00:23:41.277 [ 00:23:41.277 { 00:23:41.277 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:41.277 "subtype": "Discovery", 00:23:41.277 "listen_addresses": [], 00:23:41.277 "allow_any_host": true, 00:23:41.277 "hosts": [] 00:23:41.277 }, 00:23:41.277 { 00:23:41.277 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.277 "subtype": "NVMe", 00:23:41.277 "listen_addresses": [ 00:23:41.277 { 00:23:41.277 "trtype": "TCP", 00:23:41.277 "adrfam": "IPv4", 00:23:41.277 "traddr": "10.0.0.2", 00:23:41.277 "trsvcid": "4420" 00:23:41.277 } 00:23:41.277 ], 00:23:41.277 "allow_any_host": true, 00:23:41.277 "hosts": [], 00:23:41.277 "serial_number": "SPDK00000000000001", 00:23:41.277 "model_number": "SPDK bdev Controller", 00:23:41.277 "max_namespaces": 2, 00:23:41.277 "min_cntlid": 1, 00:23:41.277 "max_cntlid": 65519, 00:23:41.277 "namespaces": [ 00:23:41.277 { 00:23:41.277 "nsid": 1, 00:23:41.277 "bdev_name": "Malloc0", 00:23:41.277 "name": "Malloc0", 00:23:41.277 "nguid": "55B053402AA94937859C8749798DBD58", 00:23:41.277 "uuid": "55b05340-2aa9-4937-859c-8749798dbd58" 00:23:41.277 }, 00:23:41.277 { 00:23:41.277 "nsid": 2, 00:23:41.277 "bdev_name": "Malloc1", 00:23:41.277 "name": "Malloc1", 00:23:41.277 "nguid": "8F2420E8A0D945D5B5CF19C5208C013C", 00:23:41.277 "uuid": "8f2420e8-a0d9-45d5-b5cf-19c5208c013c" 00:23:41.277 } 00:23:41.277 ] 00:23:41.277 } 00:23:41.277 ] 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1723337 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:41.277 14:25:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:41.277 rmmod 
nvme_tcp 00:23:41.277 rmmod nvme_fabrics 00:23:41.277 rmmod nvme_keyring 00:23:41.277 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:41.277 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:41.277 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:41.277 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1723280 ']' 00:23:41.277 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1723280 00:23:41.277 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1723280 ']' 00:23:41.277 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1723280 00:23:41.277 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:41.536 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:41.536 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1723280 00:23:41.536 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:41.536 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:41.536 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1723280' 00:23:41.536 killing process with pid 1723280 00:23:41.536 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1723280 00:23:41.536 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1723280 00:23:41.536 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:41.536 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:41.536 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:41.536 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:41.536 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:41.536 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:41.536 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:41.536 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:41.536 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:41.536 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.536 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.536 14:25:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.070 14:25:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:44.070 00:23:44.070 real 0m10.086s 00:23:44.070 user 0m5.425s 00:23:44.070 sys 0m5.444s 00:23:44.070 14:25:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:44.070 14:25:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.070 ************************************ 00:23:44.070 END TEST nvmf_aer 00:23:44.070 ************************************ 00:23:44.070 14:25:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:44.070 14:25:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:44.070 14:25:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:44.070 14:25:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.070 ************************************ 00:23:44.070 START TEST nvmf_async_init 00:23:44.070 ************************************ 00:23:44.070 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:44.070 * Looking for test storage... 00:23:44.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:44.070 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:44.070 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:23:44.070 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:44.070 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:44.070 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:44.070 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:44.070 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:44.070 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:44.070 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:44.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.071 --rc genhtml_branch_coverage=1 00:23:44.071 --rc genhtml_function_coverage=1 00:23:44.071 --rc genhtml_legend=1 00:23:44.071 --rc geninfo_all_blocks=1 00:23:44.071 --rc geninfo_unexecuted_blocks=1 00:23:44.071 00:23:44.071 ' 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:44.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.071 --rc genhtml_branch_coverage=1 00:23:44.071 --rc genhtml_function_coverage=1 00:23:44.071 --rc genhtml_legend=1 00:23:44.071 --rc geninfo_all_blocks=1 00:23:44.071 --rc geninfo_unexecuted_blocks=1 00:23:44.071 00:23:44.071 ' 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:44.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.071 --rc genhtml_branch_coverage=1 00:23:44.071 --rc genhtml_function_coverage=1 00:23:44.071 --rc genhtml_legend=1 00:23:44.071 --rc geninfo_all_blocks=1 00:23:44.071 --rc geninfo_unexecuted_blocks=1 00:23:44.071 00:23:44.071 ' 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:44.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.071 --rc genhtml_branch_coverage=1 00:23:44.071 --rc genhtml_function_coverage=1 00:23:44.071 --rc genhtml_legend=1 00:23:44.071 --rc geninfo_all_blocks=1 00:23:44.071 --rc geninfo_unexecuted_blocks=1 00:23:44.071 00:23:44.071 ' 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:44.071 14:25:44 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:44.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:44.071 14:25:44 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=5c900a95007749bc90522021b9af1bb6 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:44.071 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:44.072 14:25:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:50.640 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:50.640 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.640 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:50.641 Found net devices under 0000:af:00.0: cvl_0_0 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:50.641 Found net devices under 0000:af:00.1: cvl_0_1 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:50.641 14:25:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:50.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:50.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:23:50.641 00:23:50.641 --- 10.0.0.2 ping statistics --- 00:23:50.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.641 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:50.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:50.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:23:50.641 00:23:50.641 --- 10.0.0.1 ping statistics --- 00:23:50.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.641 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1727309 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1727309 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1727309 ']' 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.641 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.900 [2024-12-10 14:25:51.399824] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
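[editor's note] The nvmf_tcp_init trace above builds the physical-loopback topology the rest of this test rides on: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, and an iptables rule tagged SPDK_NVMF opens the NVMe/TCP port. A condensed sketch of those steps, with interface names and addresses taken verbatim from the trace (root privileges assumed):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                 # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # open the target port; the comment tag lets teardown strip only these rules later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                              # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1          # target namespace -> initiator

Because the two ports are cabled back-to-back, both pings cross the real link, which is what makes this a "phy" test rather than a loopback one.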
00:23:50.900 [2024-12-10 14:25:51.399868] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.900 [2024-12-10 14:25:51.482126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.900 [2024-12-10 14:25:51.521112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.900 [2024-12-10 14:25:51.521147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.900 [2024-12-10 14:25:51.521154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.900 [2024-12-10 14:25:51.521160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.900 [2024-12-10 14:25:51.521164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:50.900 [2024-12-10 14:25:51.521731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.900 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.900 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:50.900 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:50.900 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:50.900 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.159 [2024-12-10 14:25:51.661171] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.159 null0 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5c900a95007749bc90522021b9af1bb6 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.159 [2024-12-10 14:25:51.713422] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.159 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.418 nvme0n1 00:23:51.418 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.418 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:51.418 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.418 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.418 [ 00:23:51.418 { 00:23:51.418 "name": "nvme0n1", 00:23:51.418 "aliases": [ 00:23:51.418 "5c900a95-0077-49bc-9052-2021b9af1bb6" 00:23:51.418 ], 00:23:51.418 "product_name": "NVMe disk", 00:23:51.418 "block_size": 512, 00:23:51.418 "num_blocks": 2097152, 00:23:51.418 "uuid": "5c900a95-0077-49bc-9052-2021b9af1bb6", 00:23:51.418 "numa_id": 1, 00:23:51.418 "assigned_rate_limits": { 00:23:51.418 "rw_ios_per_sec": 0, 00:23:51.418 "rw_mbytes_per_sec": 0, 00:23:51.418 "r_mbytes_per_sec": 0, 00:23:51.418 "w_mbytes_per_sec": 0 00:23:51.418 }, 00:23:51.418 "claimed": false, 00:23:51.418 "zoned": false, 00:23:51.418 "supported_io_types": { 00:23:51.418 "read": true, 00:23:51.418 "write": true, 00:23:51.418 "unmap": false, 00:23:51.418 "flush": true, 00:23:51.418 "reset": true, 00:23:51.418 "nvme_admin": true, 00:23:51.418 "nvme_io": true, 00:23:51.418 "nvme_io_md": false, 00:23:51.418 "write_zeroes": true, 00:23:51.418 "zcopy": false, 00:23:51.418 "get_zone_info": false, 00:23:51.418 "zone_management": false, 00:23:51.418 "zone_append": false, 00:23:51.418 "compare": true, 00:23:51.418 "compare_and_write": true, 00:23:51.418 "abort": true, 00:23:51.418 "seek_hole": false, 00:23:51.418 "seek_data": false, 00:23:51.418 "copy": true, 00:23:51.418 "nvme_iov_md": false 00:23:51.418 }, 00:23:51.418 
"memory_domains": [ 00:23:51.418 { 00:23:51.418 "dma_device_id": "system", 00:23:51.418 "dma_device_type": 1 00:23:51.418 } 00:23:51.418 ], 00:23:51.418 "driver_specific": { 00:23:51.418 "nvme": [ 00:23:51.418 { 00:23:51.418 "trid": { 00:23:51.418 "trtype": "TCP", 00:23:51.418 "adrfam": "IPv4", 00:23:51.418 "traddr": "10.0.0.2", 00:23:51.418 "trsvcid": "4420", 00:23:51.418 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:51.418 }, 00:23:51.418 "ctrlr_data": { 00:23:51.418 "cntlid": 1, 00:23:51.418 "vendor_id": "0x8086", 00:23:51.418 "model_number": "SPDK bdev Controller", 00:23:51.418 "serial_number": "00000000000000000000", 00:23:51.418 "firmware_revision": "25.01", 00:23:51.418 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:51.418 "oacs": { 00:23:51.418 "security": 0, 00:23:51.418 "format": 0, 00:23:51.418 "firmware": 0, 00:23:51.418 "ns_manage": 0 00:23:51.418 }, 00:23:51.418 "multi_ctrlr": true, 00:23:51.418 "ana_reporting": false 00:23:51.418 }, 00:23:51.418 "vs": { 00:23:51.418 "nvme_version": "1.3" 00:23:51.418 }, 00:23:51.418 "ns_data": { 00:23:51.418 "id": 1, 00:23:51.418 "can_share": true 00:23:51.418 } 00:23:51.418 } 00:23:51.418 ], 00:23:51.418 "mp_policy": "active_passive" 00:23:51.418 } 00:23:51.418 } 00:23:51.418 ] 00:23:51.418 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.418 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:51.418 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.418 14:25:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.418 [2024-12-10 14:25:51.977967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:51.418 [2024-12-10 14:25:51.978020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x789550 (9): Bad file descriptor 00:23:51.418 [2024-12-10 14:25:52.110299] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:23:51.418 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.418 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:51.418 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.418 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.418 [ 00:23:51.418 { 00:23:51.418 "name": "nvme0n1", 00:23:51.418 "aliases": [ 00:23:51.418 "5c900a95-0077-49bc-9052-2021b9af1bb6" 00:23:51.418 ], 00:23:51.418 "product_name": "NVMe disk", 00:23:51.418 "block_size": 512, 00:23:51.418 "num_blocks": 2097152, 00:23:51.418 "uuid": "5c900a95-0077-49bc-9052-2021b9af1bb6", 00:23:51.418 "numa_id": 1, 00:23:51.418 "assigned_rate_limits": { 00:23:51.418 "rw_ios_per_sec": 0, 00:23:51.418 "rw_mbytes_per_sec": 0, 00:23:51.418 "r_mbytes_per_sec": 0, 00:23:51.418 "w_mbytes_per_sec": 0 00:23:51.418 }, 00:23:51.418 "claimed": false, 00:23:51.418 "zoned": false, 00:23:51.418 "supported_io_types": { 00:23:51.418 "read": true, 00:23:51.418 "write": true, 00:23:51.418 "unmap": false, 00:23:51.418 "flush": true, 00:23:51.418 "reset": true, 00:23:51.418 "nvme_admin": true, 00:23:51.418 "nvme_io": true, 00:23:51.418 "nvme_io_md": false, 00:23:51.418 "write_zeroes": true, 00:23:51.418 "zcopy": false, 00:23:51.418 "get_zone_info": false, 00:23:51.418 "zone_management": false, 00:23:51.418 "zone_append": false, 00:23:51.418 "compare": true, 00:23:51.418 "compare_and_write": true, 00:23:51.418 "abort": true, 00:23:51.418 "seek_hole": false, 00:23:51.418 "seek_data": false, 00:23:51.418 "copy": true, 00:23:51.418 "nvme_iov_md": false 00:23:51.418 }, 00:23:51.419 "memory_domains": [ 00:23:51.419 { 00:23:51.419 "dma_device_id": "system", 00:23:51.419 "dma_device_type": 1 00:23:51.419 } 00:23:51.419 ], 00:23:51.419 "driver_specific": { 00:23:51.419 "nvme": [ 00:23:51.419 { 00:23:51.419 "trid": { 00:23:51.419 "trtype": "TCP", 00:23:51.419 "adrfam": "IPv4", 00:23:51.419 "traddr": "10.0.0.2", 00:23:51.419 "trsvcid": "4420", 00:23:51.419 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:51.419 }, 00:23:51.419 "ctrlr_data": { 00:23:51.419 "cntlid": 2, 00:23:51.419 "vendor_id": "0x8086", 00:23:51.419 "model_number": "SPDK bdev Controller", 00:23:51.419 "serial_number": "00000000000000000000", 00:23:51.419 "firmware_revision": "25.01", 00:23:51.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:51.419 "oacs": { 00:23:51.419 "security": 0, 00:23:51.419 "format": 0, 00:23:51.419 "firmware": 0, 00:23:51.419 "ns_manage": 0 00:23:51.419 }, 00:23:51.419 "multi_ctrlr": true, 00:23:51.419 "ana_reporting": false 00:23:51.419 }, 00:23:51.419 "vs": { 00:23:51.419 "nvme_version": "1.3" 00:23:51.419 }, 00:23:51.419 "ns_data": { 00:23:51.419 "id": 1, 00:23:51.419 "can_share": true 00:23:51.419 } 00:23:51.419 } 00:23:51.419 ], 00:23:51.419 "mp_policy": "active_passive" 00:23:51.419 } 00:23:51.419 } 00:23:51.419 ] 00:23:51.419 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.419 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.419 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.419 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.419 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
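[editor's note] The two bdev_get_bdevs dumps bracketing the reset tell the async_init story: the namespace GUID passed to nvmf_subsystem_add_ns (5c900a95007749bc90522021b9af1bb6) surfaces as the bdev's uuid and alias, and ctrlr_data.cntlid moves from 1 to 2 because the reset drops the connection and the target hands out a fresh controller on reconnect. A sketch of how that could be checked from a shell, assuming SPDK's scripts/rpc.py and jq are available (the harness's rpc_cmd wrapper plays the same role as rpc.py here):

    q='.[0].driver_specific.nvme[0].ctrlr_data.cntlid'
    before=$(scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq "$q")
    scripts/rpc.py bdev_nvme_reset_controller nvme0
    sleep 1   # crude settle; the harness instead watches for the reset-complete notice
    after=$(scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq "$q")
    (( after > before )) && echo "reconnected: cntlid $before -> $after"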
00:23:51.419 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:51.419 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.eSHzZwzKXK 00:23:51.419 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:51.419 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.eSHzZwzKXK 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.eSHzZwzKXK 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.678 [2024-12-10 14:25:52.182594] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:51.678 [2024-12-10 14:25:52.182682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.678 [2024-12-10 14:25:52.202661] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.678 nvme0n1 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.678 [ 00:23:51.678 { 00:23:51.678 "name": "nvme0n1", 00:23:51.678 "aliases": [ 00:23:51.678 "5c900a95-0077-49bc-9052-2021b9af1bb6" 00:23:51.678 ], 00:23:51.678 "product_name": "NVMe disk", 00:23:51.678 "block_size": 512, 00:23:51.678 "num_blocks": 2097152, 00:23:51.678 "uuid": "5c900a95-0077-49bc-9052-2021b9af1bb6", 00:23:51.678 "numa_id": 1, 00:23:51.678 "assigned_rate_limits": { 00:23:51.678 "rw_ios_per_sec": 0, 00:23:51.678 "rw_mbytes_per_sec": 0, 00:23:51.678 "r_mbytes_per_sec": 0, 00:23:51.678 "w_mbytes_per_sec": 0 00:23:51.678 }, 00:23:51.678 "claimed": false, 00:23:51.678 "zoned": false, 00:23:51.678 "supported_io_types": { 00:23:51.678 "read": true, 00:23:51.678 "write": true, 00:23:51.678 "unmap": false, 00:23:51.678 "flush": true, 00:23:51.678 "reset": true, 00:23:51.678 "nvme_admin": true, 00:23:51.678 "nvme_io": true, 00:23:51.678 "nvme_io_md": false, 00:23:51.678 "write_zeroes": true, 00:23:51.678 "zcopy": false, 00:23:51.678 "get_zone_info": false, 00:23:51.678 "zone_management": false, 00:23:51.678 "zone_append": false, 00:23:51.678 "compare": true, 00:23:51.678 "compare_and_write": true, 00:23:51.678 "abort": true, 00:23:51.678 "seek_hole": false, 00:23:51.678 "seek_data": false, 00:23:51.678 "copy": true, 00:23:51.678 "nvme_iov_md": false 00:23:51.678 }, 00:23:51.678 "memory_domains": [ 00:23:51.678 { 00:23:51.678 "dma_device_id": "system", 00:23:51.678 "dma_device_type": 1 00:23:51.678 } 00:23:51.678 ], 00:23:51.678 "driver_specific": { 00:23:51.678 "nvme": [ 00:23:51.678 { 00:23:51.678 "trid": { 00:23:51.678 "trtype": "TCP", 00:23:51.678 "adrfam": "IPv4", 00:23:51.678 "traddr": "10.0.0.2", 00:23:51.678 "trsvcid": "4421", 00:23:51.678 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:51.678 }, 00:23:51.678 "ctrlr_data": { 00:23:51.678 "cntlid": 3, 00:23:51.678 "vendor_id": "0x8086", 00:23:51.678 "model_number": "SPDK bdev Controller", 00:23:51.678 "serial_number": "00000000000000000000", 00:23:51.678 "firmware_revision": "25.01", 00:23:51.678 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:51.678 "oacs": { 00:23:51.678 "security": 0, 00:23:51.678 "format": 0, 00:23:51.678 "firmware": 0, 00:23:51.678 "ns_manage": 0 00:23:51.678 }, 00:23:51.678 "multi_ctrlr": true, 00:23:51.678 "ana_reporting": false 00:23:51.678 }, 00:23:51.678 "vs": { 00:23:51.678 "nvme_version": "1.3" 00:23:51.678 }, 00:23:51.678 "ns_data": { 00:23:51.678 "id": 1, 00:23:51.678 "can_share": true 00:23:51.678 } 00:23:51.678 } 00:23:51.678 ], 00:23:51.678 "mp_policy": "active_passive" 00:23:51.678 } 00:23:51.678 } 00:23:51.678 ] 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.eSHzZwzKXK 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
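[editor's note] The secure-channel pass just completed is the standard NVMe/TCP TLS flow: a pre-shared key in interchange format (NVMeTLSkey-1:01:<base64 key material plus CRC>:) is written to a 0600 file, registered under a keyring name, required on the subsystem's host entry, and referenced again by name at attach time. Note the reconnect lands on the --secure-channel listener at 4421 with cntlid 3, while the plain-text 4420 listener stays untouched. Condensed from the trace (the redirect into the temp file is inferred, since xtrace does not echo redirections):

    key_path=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
    chmod 0600 "$key_path"
    scripts/rpc.py keyring_file_add_key key0 "$key_path"
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk key0
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0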
00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:51.678 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:51.679 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:51.679 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:51.679 rmmod nvme_tcp 00:23:51.679 rmmod nvme_fabrics 00:23:51.679 rmmod nvme_keyring 00:23:51.679 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:51.679 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:51.679 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:51.679 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1727309 ']' 00:23:51.679 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1727309 00:23:51.679 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1727309 ']' 00:23:51.679 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1727309 00:23:51.679 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:51.679 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:51.679 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1727309 00:23:51.938 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:51.938 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:51.938 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1727309' 00:23:51.938 killing process with pid 1727309 00:23:51.938 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1727309 00:23:51.938 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1727309 00:23:51.938 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:51.938 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:51.938 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:51.938 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:51.938 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:51.938 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:51.938 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:51.938 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:51.938 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:51.938 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
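[editor's note] nvmftestfini above unwinds the setup in reverse: kernel initiator modules out, target process killed by pid (the uname and ps checks guard against ever signalling sudo), iptables restored by filtering on the SPDK_NVMF comment tag, then the namespace removed. A sketch of the same teardown; the ip netns delete line is an assumption about what _remove_spdk_ns does internally, since that function body is not traced here:

    modprobe -v -r nvme-tcp            # also unloads nvme_tcp, nvme_fabrics, nvme_keyring
    kill "$nvmfpid" && wait "$nvmfpid"
    # drop only rules the harness tagged, leaving unrelated firewall state intact
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk    # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1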
00:23:51.938 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.938 14:25:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.474 14:25:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:54.474 00:23:54.474 real 0m10.265s 00:23:54.474 user 0m3.285s 00:23:54.474 sys 0m5.439s 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:54.475 ************************************ 00:23:54.475 END TEST nvmf_async_init 00:23:54.475 ************************************ 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.475 ************************************ 00:23:54.475 START TEST dma 00:23:54.475 ************************************ 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:54.475 * Looking for test storage... 00:23:54.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:54.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.475 --rc genhtml_branch_coverage=1 00:23:54.475 --rc genhtml_function_coverage=1 00:23:54.475 --rc genhtml_legend=1 00:23:54.475 --rc geninfo_all_blocks=1 00:23:54.475 --rc geninfo_unexecuted_blocks=1 00:23:54.475 00:23:54.475 ' 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:54.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.475 --rc genhtml_branch_coverage=1 00:23:54.475 --rc genhtml_function_coverage=1 00:23:54.475 --rc genhtml_legend=1 00:23:54.475 --rc geninfo_all_blocks=1 00:23:54.475 --rc geninfo_unexecuted_blocks=1 00:23:54.475 00:23:54.475 ' 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:54.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.475 --rc genhtml_branch_coverage=1 00:23:54.475 --rc genhtml_function_coverage=1 00:23:54.475 --rc genhtml_legend=1 00:23:54.475 --rc geninfo_all_blocks=1 00:23:54.475 --rc geninfo_unexecuted_blocks=1 00:23:54.475 00:23:54.475 ' 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:54.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.475 --rc genhtml_branch_coverage=1 00:23:54.475 --rc genhtml_function_coverage=1 00:23:54.475 --rc genhtml_legend=1 00:23:54.475 --rc geninfo_all_blocks=1 00:23:54.475 --rc geninfo_unexecuted_blocks=1 00:23:54.475 00:23:54.475 ' 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.475 
14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:54.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:54.475 14:25:54 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:54.475 00:23:54.475 real 0m0.207s 00:23:54.475 user 0m0.122s 00:23:54.476 sys 0m0.100s 00:23:54.476 14:25:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:54.476 14:25:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:54.476 ************************************ 00:23:54.476 END TEST dma 00:23:54.476 ************************************ 00:23:54.476 14:25:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:54.476 14:25:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:54.476 14:25:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:54.476 14:25:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.476 ************************************ 00:23:54.476 START TEST nvmf_identify 00:23:54.476 
************************************ 00:23:54.476 14:25:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:54.476 * Looking for test storage... 00:23:54.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:54.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.476 --rc genhtml_branch_coverage=1 00:23:54.476 --rc genhtml_function_coverage=1 00:23:54.476 --rc genhtml_legend=1 00:23:54.476 --rc geninfo_all_blocks=1 00:23:54.476 --rc geninfo_unexecuted_blocks=1 00:23:54.476 00:23:54.476 ' 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:54.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.476 --rc genhtml_branch_coverage=1 00:23:54.476 --rc genhtml_function_coverage=1 00:23:54.476 --rc genhtml_legend=1 00:23:54.476 --rc geninfo_all_blocks=1 00:23:54.476 --rc geninfo_unexecuted_blocks=1 00:23:54.476 00:23:54.476 ' 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:54.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.476 --rc genhtml_branch_coverage=1 00:23:54.476 --rc genhtml_function_coverage=1 00:23:54.476 --rc genhtml_legend=1 00:23:54.476 --rc geninfo_all_blocks=1 00:23:54.476 --rc geninfo_unexecuted_blocks=1 00:23:54.476 00:23:54.476 ' 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:54.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.476 --rc genhtml_branch_coverage=1 00:23:54.476 --rc genhtml_function_coverage=1 00:23:54.476 --rc genhtml_legend=1 00:23:54.476 --rc geninfo_all_blocks=1 00:23:54.476 --rc geninfo_unexecuted_blocks=1 00:23:54.476 00:23:54.476 ' 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:54.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:54.476 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:54.477 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:54.477 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:54.477 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:54.477 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:54.477 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:54.477 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.477 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:54.477 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:54.477 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:54.477 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.477 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.477 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.477 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:54.477 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:54.477 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:54.477 14:25:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:01.042 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.042 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:01.042 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:01.042 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:01.042 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:01.042 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:01.042 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:01.042 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:01.042 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:01.042 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:01.042 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:01.042 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:01.042 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:01.042 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:01.042 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:01.042 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.042 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.042 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.042 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.042 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:01.043 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:01.043 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
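Two notes on the trace above. The earlier "common.sh: line 33: [: : integer expression expected" message is bash objecting to '[' '' -eq 1 ']': an optional flag is unset, so -eq receives an empty string instead of an integer. The test simply evaluates false and the run continues unharmed; a guarded form silences the noise (FLAG below is a hypothetical stand-in, not the script's actual variable name):

  # Sketch only: default the unset flag to 0 before the numeric comparison
  if [ "${FLAG:-0}" -eq 1 ]; then
      echo "flag enabled"
  fi

The NIC enumeration that follows is plain sysfs walking: for each PCI function whose vendor/device pair is on the supported list, the script globs the net devices the kernel exposes beneath it. A minimal standalone sketch of the same lookup, assuming the E810 ID (0x8086:0x159b) matched above:

  # Enumerate E810 PCI functions and print their kernel net devices,
  # mirroring pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in the trace.
  for pci in /sys/bus/pci/devices/*; do
      [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
  done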
00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:01.043 Found net devices under 0000:af:00.0: cvl_0_0 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:01.043 Found net devices under 0000:af:00.1: cvl_0_1 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:01.043 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:01.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:24:01.302 00:24:01.302 --- 10.0.0.2 ping statistics --- 00:24:01.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.302 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:01.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:24:01.302 00:24:01.302 --- 10.0.0.1 ping statistics --- 00:24:01.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.302 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1731584 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1731584 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1731584 ']' 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.302 14:26:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:01.302 [2024-12-10 14:26:01.955182] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
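With both interfaces answering pings, the xtrace above has finished assembling the test network. Unpacked, the wiring is: the two E810 ports are split across network namespaces so target and initiator traffic crosses a real link on a single host; cvl_0_0 moves into cvl_0_0_ns_spdk and carries the target address, while cvl_0_1 stays in the root namespace as the initiator side. Reduced to its generic pattern (names and addresses exactly as in the log; error handling omitted):

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1        # start from clean ports
  ip netns add cvl_0_0_ns_spdk                              # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move the target port in
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                        # verify each direction
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt launch just traced is wrapped in 'ip netns exec cvl_0_0_ns_spdk' for the same reason: the target listens at 10.0.0.2 inside the namespace (-m 0xF pins it to four cores, -e 0xFFFF enables every tracepoint group), and the script blocks until the RPC socket /var/tmp/spdk.sock accepts connections before provisioning anything over it.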
00:24:01.302 [2024-12-10 14:26:01.955264] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.561 [2024-12-10 14:26:02.042487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:01.561 [2024-12-10 14:26:02.082545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.561 [2024-12-10 14:26:02.082583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.561 [2024-12-10 14:26:02.082593] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.561 [2024-12-10 14:26:02.082599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.561 [2024-12-10 14:26:02.082604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.561 [2024-12-10 14:26:02.084139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.561 [2024-12-10 14:26:02.084263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.561 [2024-12-10 14:26:02.084312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.561 [2024-12-10 14:26:02.084313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:02.126 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.126 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:24:02.126 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:02.126 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.126 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.126 [2024-12-10 14:26:02.799359] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.126 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.126 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:02.126 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:02.126 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.126 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:02.126 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.126 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.388 Malloc0 00:24:02.388 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.388 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:02.388 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.388 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.388 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.388 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:02.388 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.388 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.388 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.388 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:02.388 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.388 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.388 [2024-12-10 14:26:02.894746] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.388 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.388 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:02.388 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.388 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.388 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.388 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:02.388 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.388 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.388 [ 00:24:02.388 { 00:24:02.388 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:02.388 "subtype": "Discovery", 00:24:02.388 "listen_addresses": [ 00:24:02.388 { 00:24:02.388 "trtype": "TCP", 00:24:02.388 "adrfam": "IPv4", 00:24:02.388 "traddr": "10.0.0.2", 00:24:02.388 "trsvcid": "4420" 00:24:02.388 } 00:24:02.388 ], 00:24:02.388 "allow_any_host": true, 00:24:02.388 "hosts": [] 00:24:02.388 }, 00:24:02.388 { 00:24:02.388 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.388 "subtype": "NVMe", 00:24:02.388 "listen_addresses": [ 00:24:02.388 { 00:24:02.388 "trtype": "TCP", 00:24:02.388 "adrfam": "IPv4", 00:24:02.388 "traddr": "10.0.0.2", 00:24:02.388 "trsvcid": "4420" 00:24:02.388 } 00:24:02.388 ], 00:24:02.388 "allow_any_host": true, 00:24:02.388 "hosts": [], 00:24:02.388 "serial_number": "SPDK00000000000001", 00:24:02.388 "model_number": "SPDK bdev Controller", 00:24:02.388 "max_namespaces": 32, 00:24:02.388 "min_cntlid": 1, 00:24:02.388 "max_cntlid": 65519, 00:24:02.388 "namespaces": [ 00:24:02.388 { 00:24:02.388 "nsid": 1, 00:24:02.388 "bdev_name": "Malloc0", 00:24:02.388 "name": "Malloc0", 00:24:02.388 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:02.388 "eui64": "ABCDEF0123456789", 00:24:02.388 "uuid": "6a8cc414-7256-47b3-b42a-28d34e27548b" 00:24:02.388 } 00:24:02.388 ] 00:24:02.388 } 00:24:02.388 ] 00:24:02.388 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.388 14:26:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:02.388 [2024-12-10 14:26:02.948512] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:24:02.388 [2024-12-10 14:26:02.948558] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1731634 ] 00:24:02.388 [2024-12-10 14:26:02.986793] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:02.388 [2024-12-10 14:26:02.986844] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:02.388 [2024-12-10 14:26:02.986849] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:02.388 [2024-12-10 14:26:02.986867] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:02.388 [2024-12-10 14:26:02.986875] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:02.388 [2024-12-10 14:26:02.994458] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:02.388 [2024-12-10 14:26:02.994491] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15e0690 0 00:24:02.388 [2024-12-10 14:26:03.001231] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:02.388 [2024-12-10 14:26:03.001247] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:02.388 [2024-12-10 14:26:03.001251] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:02.388 [2024-12-10 14:26:03.001254] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:02.388 [2024-12-10 14:26:03.001290] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.388 [2024-12-10 14:26:03.001295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.388 [2024-12-10 14:26:03.001299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15e0690) 00:24:02.388 [2024-12-10 14:26:03.001311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:02.388 [2024-12-10 14:26:03.001328] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642100, cid 0, qid 0 00:24:02.388 [2024-12-10 14:26:03.008228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.388 [2024-12-10 14:26:03.008237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.388 [2024-12-10 14:26:03.008240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.388 [2024-12-10 14:26:03.008244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642100) on tqpair=0x15e0690 00:24:02.388 [2024-12-10 14:26:03.008258] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:02.388 [2024-12-10 14:26:03.008268] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:02.388 [2024-12-10 14:26:03.008273] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:02.388 [2024-12-10 14:26:03.008285] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.388 [2024-12-10 14:26:03.008288] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.388 [2024-12-10 14:26:03.008292] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15e0690) 00:24:02.388 [2024-12-10 14:26:03.008298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.388 [2024-12-10 14:26:03.008311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642100, cid 0, qid 0 00:24:02.388 [2024-12-10 14:26:03.008388] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.388 [2024-12-10 14:26:03.008394] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.388 [2024-12-10 14:26:03.008397] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.388 [2024-12-10 14:26:03.008400] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642100) on tqpair=0x15e0690 00:24:02.388 [2024-12-10 14:26:03.008406] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:02.388 [2024-12-10 14:26:03.008412] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:02.388 [2024-12-10 14:26:03.008418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.388 [2024-12-10 14:26:03.008421] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.388 [2024-12-10 14:26:03.008424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15e0690) 00:24:02.388 [2024-12-10 14:26:03.008430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.388 [2024-12-10 14:26:03.008440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642100, cid 0, qid 0 00:24:02.388 [2024-12-10 14:26:03.008501] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.388 [2024-12-10 14:26:03.008507] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.388 [2024-12-10 14:26:03.008510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.388 [2024-12-10 14:26:03.008513] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642100) on tqpair=0x15e0690 00:24:02.388 [2024-12-10 14:26:03.008518] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:02.388 [2024-12-10 14:26:03.008525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:02.388 [2024-12-10 14:26:03.008531] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.388 [2024-12-10 14:26:03.008534] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.388 [2024-12-10 14:26:03.008537] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15e0690) 00:24:02.388 [2024-12-10 14:26:03.008543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.388 [2024-12-10 14:26:03.008552] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642100, cid 0, qid 0 
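A decoding note for the driver trace here: NVMe over Fabrics exposes no memory-mapped BAR, so every controller register the init state machine touches (VS, CAP, CC, CSTS) travels as a Fabrics Property Get/Set capsule on the admin queue; that is what each "FABRIC PROPERTY GET qid:0 cid:0" line is. Condensed, with the state names as the driver prints them:

  connect adminq  ->  FABRIC CONNECT, completed with CNTLID 0x0001
  read vs/cap     ->  Property Get of the Version and Capabilities registers
  check en        ->  Property Get of CC: is the controller already enabled?

The entries that follow finish the standard handshake: with CC.EN = 0 and CSTS.RDY = 0 the controller is cleanly disabled, the host sets CC.EN = 1 via Property Set, then polls CSTS until RDY = 1 before issuing Identify.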
00:24:02.388 [2024-12-10 14:26:03.008617] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.388 [2024-12-10 14:26:03.008623] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.388 [2024-12-10 14:26:03.008626] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.388 [2024-12-10 14:26:03.008629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642100) on tqpair=0x15e0690 00:24:02.388 [2024-12-10 14:26:03.008634] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:02.388 [2024-12-10 14:26:03.008643] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.008647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.008650] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15e0690) 00:24:02.389 [2024-12-10 14:26:03.008656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.389 [2024-12-10 14:26:03.008665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642100, cid 0, qid 0 00:24:02.389 [2024-12-10 14:26:03.008744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.389 [2024-12-10 14:26:03.008749] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.389 [2024-12-10 14:26:03.008752] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.008755] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642100) on tqpair=0x15e0690 00:24:02.389 [2024-12-10 14:26:03.008760] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:02.389 [2024-12-10 14:26:03.008764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:02.389 [2024-12-10 14:26:03.008771] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:02.389 [2024-12-10 14:26:03.008881] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:02.389 [2024-12-10 14:26:03.008886] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:02.389 [2024-12-10 14:26:03.008894] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.008897] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.008900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15e0690) 00:24:02.389 [2024-12-10 14:26:03.008905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.389 [2024-12-10 14:26:03.008915] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642100, cid 0, qid 0 00:24:02.389 [2024-12-10 14:26:03.008977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.389 [2024-12-10 14:26:03.008982] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.389 [2024-12-10 14:26:03.008985] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.008988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642100) on tqpair=0x15e0690 00:24:02.389 [2024-12-10 14:26:03.008992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:02.389 [2024-12-10 14:26:03.009000] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.009004] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.009007] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15e0690) 00:24:02.389 [2024-12-10 14:26:03.009012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.389 [2024-12-10 14:26:03.009020] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642100, cid 0, qid 0 00:24:02.389 [2024-12-10 14:26:03.009094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.389 [2024-12-10 14:26:03.009099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.389 [2024-12-10 14:26:03.009102] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.009105] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642100) on tqpair=0x15e0690 00:24:02.389 [2024-12-10 14:26:03.009111] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:02.389 [2024-12-10 14:26:03.009115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:02.389 [2024-12-10 14:26:03.009122] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:02.389 [2024-12-10 14:26:03.009129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:02.389 [2024-12-10 14:26:03.009137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.009140] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15e0690) 00:24:02.389 [2024-12-10 14:26:03.009146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.389 [2024-12-10 14:26:03.009155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642100, cid 0, qid 0 00:24:02.389 [2024-12-10 14:26:03.009258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.389 [2024-12-10 14:26:03.009265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.389 [2024-12-10 14:26:03.009268] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.009271] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15e0690): datao=0, datal=4096, cccid=0 00:24:02.389 [2024-12-10 14:26:03.009275] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1642100) on tqpair(0x15e0690): expected_datao=0, payload_size=4096 00:24:02.389 [2024-12-10 14:26:03.009279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.009286] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.009290] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.009299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.389 [2024-12-10 14:26:03.009304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.389 [2024-12-10 14:26:03.009307] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.009310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642100) on tqpair=0x15e0690 00:24:02.389 [2024-12-10 14:26:03.009317] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:02.389 [2024-12-10 14:26:03.009325] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:02.389 [2024-12-10 14:26:03.009328] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:02.389 [2024-12-10 14:26:03.009333] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:02.389 [2024-12-10 14:26:03.009337] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:02.389 [2024-12-10 14:26:03.009341] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:02.389 [2024-12-10 14:26:03.009349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:02.389 [2024-12-10 14:26:03.009356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.009359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.009362] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15e0690) 00:24:02.389 [2024-12-10 14:26:03.009368] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:02.389 [2024-12-10 14:26:03.009380] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642100, cid 0, qid 0 00:24:02.389 [2024-12-10 14:26:03.009441] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.389 [2024-12-10 14:26:03.009446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.389 [2024-12-10 14:26:03.009449] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.009452] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642100) on tqpair=0x15e0690 00:24:02.389 [2024-12-10 14:26:03.009459] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.009462] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.009465] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15e0690) 00:24:02.389 
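Two figures from the identify parse above are worth connecting. The TCP transport reports an effectively unbounded transfer size (4294967295 bytes), but the controller's MDTS clamps usable transfers to 131072 bytes; against the 4096-byte minimum page size reported in the identify dump further down, that is an MDTS field of 5, since 4096 * 2^5 = 131072. Likewise, the four ASYNC EVENT REQUEST submissions on cid 0 through 3 around this point correspond to the "Async Event Request Limit: 4" the controller advertises: the host parks that many AERs on the admin queue so the target can report events, discovery log changes in this case, without being polled.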
[2024-12-10 14:26:03.009470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.389 [2024-12-10 14:26:03.009475] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.009479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.009482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15e0690) 00:24:02.389 [2024-12-10 14:26:03.009486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.389 [2024-12-10 14:26:03.009491] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.009495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.009497] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15e0690) 00:24:02.389 [2024-12-10 14:26:03.009502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.389 [2024-12-10 14:26:03.009507] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.009510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.009513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.389 [2024-12-10 14:26:03.009518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.389 [2024-12-10 14:26:03.009522] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:02.389 [2024-12-10 14:26:03.009532] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:02.389 [2024-12-10 14:26:03.009538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.389 [2024-12-10 14:26:03.009541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15e0690) 00:24:02.389 [2024-12-10 14:26:03.009546] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.389 [2024-12-10 14:26:03.009556] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642100, cid 0, qid 0 00:24:02.389 [2024-12-10 14:26:03.009561] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642280, cid 1, qid 0 00:24:02.389 [2024-12-10 14:26:03.009565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642400, cid 2, qid 0 00:24:02.389 [2024-12-10 14:26:03.009569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.389 [2024-12-10 14:26:03.009572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642700, cid 4, qid 0 00:24:02.389 [2024-12-10 14:26:03.009666] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.389 [2024-12-10 14:26:03.009672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.389 [2024-12-10 14:26:03.009676] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:24:02.389 [2024-12-10 14:26:03.009680] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642700) on tqpair=0x15e0690 00:24:02.389 [2024-12-10 14:26:03.009685] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:02.389 [2024-12-10 14:26:03.009689] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:02.390 [2024-12-10 14:26:03.009698] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.009702] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15e0690) 00:24:02.390 [2024-12-10 14:26:03.009707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.390 [2024-12-10 14:26:03.009716] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642700, cid 4, qid 0 00:24:02.390 [2024-12-10 14:26:03.009788] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.390 [2024-12-10 14:26:03.009794] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.390 [2024-12-10 14:26:03.009797] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.009800] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15e0690): datao=0, datal=4096, cccid=4 00:24:02.390 [2024-12-10 14:26:03.009804] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1642700) on tqpair(0x15e0690): expected_datao=0, payload_size=4096 00:24:02.390 [2024-12-10 14:26:03.009807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.009813] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.009816] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.009825] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.390 [2024-12-10 14:26:03.009830] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.390 [2024-12-10 14:26:03.009833] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.009836] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642700) on tqpair=0x15e0690 00:24:02.390 [2024-12-10 14:26:03.009847] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:02.390 [2024-12-10 14:26:03.009868] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.009873] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15e0690) 00:24:02.390 [2024-12-10 14:26:03.009878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.390 [2024-12-10 14:26:03.009884] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.009887] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.009890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15e0690) 00:24:02.390 [2024-12-10 14:26:03.009895] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.390 [2024-12-10 14:26:03.009908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642700, cid 4, qid 0 00:24:02.390 [2024-12-10 14:26:03.009913] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642880, cid 5, qid 0 00:24:02.390 [2024-12-10 14:26:03.010014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.390 [2024-12-10 14:26:03.010019] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.390 [2024-12-10 14:26:03.010022] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.010025] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15e0690): datao=0, datal=1024, cccid=4 00:24:02.390 [2024-12-10 14:26:03.010029] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1642700) on tqpair(0x15e0690): expected_datao=0, payload_size=1024 00:24:02.390 [2024-12-10 14:26:03.010034] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.010039] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.010043] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.010047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.390 [2024-12-10 14:26:03.010052] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.390 [2024-12-10 14:26:03.010055] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.010058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642880) on tqpair=0x15e0690 00:24:02.390 [2024-12-10 14:26:03.050322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.390 [2024-12-10 14:26:03.050337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.390 [2024-12-10 14:26:03.050340] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.050344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642700) on tqpair=0x15e0690 00:24:02.390 [2024-12-10 14:26:03.050357] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.050360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15e0690) 00:24:02.390 [2024-12-10 14:26:03.050368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.390 [2024-12-10 14:26:03.050384] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642700, cid 4, qid 0 00:24:02.390 [2024-12-10 14:26:03.050458] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.390 [2024-12-10 14:26:03.050464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.390 [2024-12-10 14:26:03.050467] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.050471] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15e0690): datao=0, datal=3072, cccid=4 00:24:02.390 [2024-12-10 14:26:03.050475] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1642700) on tqpair(0x15e0690): expected_datao=0, payload_size=3072 00:24:02.390 [2024-12-10 14:26:03.050478] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.050484] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.050488] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.050506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.390 [2024-12-10 14:26:03.050511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.390 [2024-12-10 14:26:03.050514] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.050517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642700) on tqpair=0x15e0690 00:24:02.390 [2024-12-10 14:26:03.050524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.050528] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15e0690) 00:24:02.390 [2024-12-10 14:26:03.050533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.390 [2024-12-10 14:26:03.050546] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642700, cid 4, qid 0 00:24:02.390 [2024-12-10 14:26:03.050615] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.390 [2024-12-10 14:26:03.050620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.390 [2024-12-10 14:26:03.050623] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.050626] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15e0690): datao=0, datal=8, cccid=4 00:24:02.390 [2024-12-10 14:26:03.050630] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1642700) on tqpair(0x15e0690): expected_datao=0, payload_size=8 00:24:02.390 [2024-12-10 14:26:03.050637] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.050642] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.050645] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.092376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.390 [2024-12-10 14:26:03.092386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.390 [2024-12-10 14:26:03.092389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.390 [2024-12-10 14:26:03.092393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642700) on tqpair=0x15e0690 00:24:02.390 ===================================================== 00:24:02.390 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:02.390 ===================================================== 00:24:02.390 Controller Capabilities/Features 00:24:02.390 ================================ 00:24:02.390 Vendor ID: 0000 00:24:02.390 Subsystem Vendor ID: 0000 00:24:02.390 Serial Number: .................... 00:24:02.390 Model Number: ........................................ 
00:24:02.390 Firmware Version: 25.01 00:24:02.390 Recommended Arb Burst: 0 00:24:02.390 IEEE OUI Identifier: 00 00 00 00:24:02.390 Multi-path I/O 00:24:02.390 May have multiple subsystem ports: No 00:24:02.390 May have multiple controllers: No 00:24:02.390 Associated with SR-IOV VF: No 00:24:02.390 Max Data Transfer Size: 131072 00:24:02.390 Max Number of Namespaces: 0 00:24:02.390 Max Number of I/O Queues: 1024 00:24:02.390 NVMe Specification Version (VS): 1.3 00:24:02.390 NVMe Specification Version (Identify): 1.3 00:24:02.390 Maximum Queue Entries: 128 00:24:02.390 Contiguous Queues Required: Yes 00:24:02.390 Arbitration Mechanisms Supported 00:24:02.390 Weighted Round Robin: Not Supported 00:24:02.390 Vendor Specific: Not Supported 00:24:02.390 Reset Timeout: 15000 ms 00:24:02.390 Doorbell Stride: 4 bytes 00:24:02.390 NVM Subsystem Reset: Not Supported 00:24:02.390 Command Sets Supported 00:24:02.390 NVM Command Set: Supported 00:24:02.390 Boot Partition: Not Supported 00:24:02.390 Memory Page Size Minimum: 4096 bytes 00:24:02.390 Memory Page Size Maximum: 4096 bytes 00:24:02.390 Persistent Memory Region: Not Supported 00:24:02.390 Optional Asynchronous Events Supported 00:24:02.390 Namespace Attribute Notices: Not Supported 00:24:02.390 Firmware Activation Notices: Not Supported 00:24:02.390 ANA Change Notices: Not Supported 00:24:02.390 PLE Aggregate Log Change Notices: Not Supported 00:24:02.390 LBA Status Info Alert Notices: Not Supported 00:24:02.390 EGE Aggregate Log Change Notices: Not Supported 00:24:02.390 Normal NVM Subsystem Shutdown event: Not Supported 00:24:02.390 Zone Descriptor Change Notices: Not Supported 00:24:02.390 Discovery Log Change Notices: Supported 00:24:02.390 Controller Attributes 00:24:02.390 128-bit Host Identifier: Not Supported 00:24:02.390 Non-Operational Permissive Mode: Not Supported 00:24:02.390 NVM Sets: Not Supported 00:24:02.390 Read Recovery Levels: Not Supported 00:24:02.390 Endurance Groups: Not Supported 00:24:02.390 Predictable Latency Mode: Not Supported 00:24:02.390 Traffic Based Keep ALive: Not Supported 00:24:02.390 Namespace Granularity: Not Supported 00:24:02.390 SQ Associations: Not Supported 00:24:02.390 UUID List: Not Supported 00:24:02.390 Multi-Domain Subsystem: Not Supported 00:24:02.390 Fixed Capacity Management: Not Supported 00:24:02.390 Variable Capacity Management: Not Supported 00:24:02.390 Delete Endurance Group: Not Supported 00:24:02.390 Delete NVM Set: Not Supported 00:24:02.391 Extended LBA Formats Supported: Not Supported 00:24:02.391 Flexible Data Placement Supported: Not Supported 00:24:02.391 00:24:02.391 Controller Memory Buffer Support 00:24:02.391 ================================ 00:24:02.391 Supported: No 00:24:02.391 00:24:02.391 Persistent Memory Region Support 00:24:02.391 ================================ 00:24:02.391 Supported: No 00:24:02.391 00:24:02.391 Admin Command Set Attributes 00:24:02.391 ============================ 00:24:02.391 Security Send/Receive: Not Supported 00:24:02.391 Format NVM: Not Supported 00:24:02.391 Firmware Activate/Download: Not Supported 00:24:02.391 Namespace Management: Not Supported 00:24:02.391 Device Self-Test: Not Supported 00:24:02.391 Directives: Not Supported 00:24:02.391 NVMe-MI: Not Supported 00:24:02.391 Virtualization Management: Not Supported 00:24:02.391 Doorbell Buffer Config: Not Supported 00:24:02.391 Get LBA Status Capability: Not Supported 00:24:02.391 Command & Feature Lockdown Capability: Not Supported 00:24:02.391 Abort Command Limit: 1 00:24:02.391 Async 
Event Request Limit: 4 00:24:02.391 Number of Firmware Slots: N/A 00:24:02.391 Firmware Slot 1 Read-Only: N/A 00:24:02.391 Firmware Activation Without Reset: N/A 00:24:02.391 Multiple Update Detection Support: N/A 00:24:02.391 Firmware Update Granularity: No Information Provided 00:24:02.391 Per-Namespace SMART Log: No 00:24:02.391 Asymmetric Namespace Access Log Page: Not Supported 00:24:02.391 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:02.391 Command Effects Log Page: Not Supported 00:24:02.391 Get Log Page Extended Data: Supported 00:24:02.391 Telemetry Log Pages: Not Supported 00:24:02.391 Persistent Event Log Pages: Not Supported 00:24:02.391 Supported Log Pages Log Page: May Support 00:24:02.391 Commands Supported & Effects Log Page: Not Supported 00:24:02.391 Feature Identifiers & Effects Log Page:May Support 00:24:02.391 NVMe-MI Commands & Effects Log Page: May Support 00:24:02.391 Data Area 4 for Telemetry Log: Not Supported 00:24:02.391 Error Log Page Entries Supported: 128 00:24:02.391 Keep Alive: Not Supported 00:24:02.391 00:24:02.391 NVM Command Set Attributes 00:24:02.391 ========================== 00:24:02.391 Submission Queue Entry Size 00:24:02.391 Max: 1 00:24:02.391 Min: 1 00:24:02.391 Completion Queue Entry Size 00:24:02.391 Max: 1 00:24:02.391 Min: 1 00:24:02.391 Number of Namespaces: 0 00:24:02.391 Compare Command: Not Supported 00:24:02.391 Write Uncorrectable Command: Not Supported 00:24:02.391 Dataset Management Command: Not Supported 00:24:02.391 Write Zeroes Command: Not Supported 00:24:02.391 Set Features Save Field: Not Supported 00:24:02.391 Reservations: Not Supported 00:24:02.391 Timestamp: Not Supported 00:24:02.391 Copy: Not Supported 00:24:02.391 Volatile Write Cache: Not Present 00:24:02.391 Atomic Write Unit (Normal): 1 00:24:02.391 Atomic Write Unit (PFail): 1 00:24:02.391 Atomic Compare & Write Unit: 1 00:24:02.391 Fused Compare & Write: Supported 00:24:02.391 Scatter-Gather List 00:24:02.391 SGL Command Set: Supported 00:24:02.391 SGL Keyed: Supported 00:24:02.391 SGL Bit Bucket Descriptor: Not Supported 00:24:02.391 SGL Metadata Pointer: Not Supported 00:24:02.391 Oversized SGL: Not Supported 00:24:02.391 SGL Metadata Address: Not Supported 00:24:02.391 SGL Offset: Supported 00:24:02.391 Transport SGL Data Block: Not Supported 00:24:02.391 Replay Protected Memory Block: Not Supported 00:24:02.391 00:24:02.391 Firmware Slot Information 00:24:02.391 ========================= 00:24:02.391 Active slot: 0 00:24:02.391 00:24:02.391 00:24:02.391 Error Log 00:24:02.391 ========= 00:24:02.391 00:24:02.391 Active Namespaces 00:24:02.391 ================= 00:24:02.391 Discovery Log Page 00:24:02.391 ================== 00:24:02.391 Generation Counter: 2 00:24:02.391 Number of Records: 2 00:24:02.391 Record Format: 0 00:24:02.391 00:24:02.391 Discovery Log Entry 0 00:24:02.391 ---------------------- 00:24:02.391 Transport Type: 3 (TCP) 00:24:02.391 Address Family: 1 (IPv4) 00:24:02.391 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:02.391 Entry Flags: 00:24:02.391 Duplicate Returned Information: 1 00:24:02.391 Explicit Persistent Connection Support for Discovery: 1 00:24:02.391 Transport Requirements: 00:24:02.391 Secure Channel: Not Required 00:24:02.391 Port ID: 0 (0x0000) 00:24:02.391 Controller ID: 65535 (0xffff) 00:24:02.391 Admin Max SQ Size: 128 00:24:02.391 Transport Service Identifier: 4420 00:24:02.391 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:02.391 Transport Address: 10.0.0.2 00:24:02.391 
Discovery Log Entry 1 00:24:02.391 ---------------------- 00:24:02.391 Transport Type: 3 (TCP) 00:24:02.391 Address Family: 1 (IPv4) 00:24:02.391 Subsystem Type: 2 (NVM Subsystem) 00:24:02.391 Entry Flags: 00:24:02.391 Duplicate Returned Information: 0 00:24:02.391 Explicit Persistent Connection Support for Discovery: 0 00:24:02.391 Transport Requirements: 00:24:02.391 Secure Channel: Not Required 00:24:02.391 Port ID: 0 (0x0000) 00:24:02.391 Controller ID: 65535 (0xffff) 00:24:02.391 Admin Max SQ Size: 128 00:24:02.391 Transport Service Identifier: 4420 00:24:02.391 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:02.391 Transport Address: 10.0.0.2 [2024-12-10 14:26:03.092471] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:02.391 [2024-12-10 14:26:03.092482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642100) on tqpair=0x15e0690 00:24:02.391 [2024-12-10 14:26:03.092488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.391 [2024-12-10 14:26:03.092492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642280) on tqpair=0x15e0690 00:24:02.391 [2024-12-10 14:26:03.092496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.391 [2024-12-10 14:26:03.092500] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642400) on tqpair=0x15e0690 00:24:02.391 [2024-12-10 14:26:03.092504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.391 [2024-12-10 14:26:03.092508] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.391 [2024-12-10 14:26:03.092512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.391 [2024-12-10 14:26:03.092522] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.391 [2024-12-10 14:26:03.092526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.391 [2024-12-10 14:26:03.092529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.391 [2024-12-10 14:26:03.092536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.391 [2024-12-10 14:26:03.092549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.391 [2024-12-10 14:26:03.096258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.391 [2024-12-10 14:26:03.096267] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.391 [2024-12-10 14:26:03.096271] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.391 [2024-12-10 14:26:03.096274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.391 [2024-12-10 14:26:03.096281] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.391 [2024-12-10 14:26:03.096284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.391 [2024-12-10 14:26:03.096287] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.391 [2024-12-10 
14:26:03.096293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.391 [2024-12-10 14:26:03.096308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.391 [2024-12-10 14:26:03.096482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.391 [2024-12-10 14:26:03.096487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.391 [2024-12-10 14:26:03.096490] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.391 [2024-12-10 14:26:03.096493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.391 [2024-12-10 14:26:03.096498] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:02.391 [2024-12-10 14:26:03.096505] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:02.391 [2024-12-10 14:26:03.096512] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.391 [2024-12-10 14:26:03.096516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.391 [2024-12-10 14:26:03.096519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.391 [2024-12-10 14:26:03.096525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.391 [2024-12-10 14:26:03.096534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.391 [2024-12-10 14:26:03.096600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.391 [2024-12-10 14:26:03.096605] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.391 [2024-12-10 14:26:03.096608] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.391 [2024-12-10 14:26:03.096611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.391 [2024-12-10 14:26:03.096620] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.391 [2024-12-10 14:26:03.096624] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.391 [2024-12-10 14:26:03.096627] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.392 [2024-12-10 14:26:03.096633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.392 [2024-12-10 14:26:03.096642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.392 [2024-12-10 14:26:03.096716] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.392 [2024-12-10 14:26:03.096722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.392 [2024-12-10 14:26:03.096725] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.096728] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.392 [2024-12-10 14:26:03.096736] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.096739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.096742] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.392 [2024-12-10 14:26:03.096748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.392 [2024-12-10 14:26:03.096757] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.392 [2024-12-10 14:26:03.096837] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.392 [2024-12-10 14:26:03.096842] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.392 [2024-12-10 14:26:03.096845] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.096848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.392 [2024-12-10 14:26:03.096857] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.096860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.096863] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.392 [2024-12-10 14:26:03.096868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.392 [2024-12-10 14:26:03.096878] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.392 [2024-12-10 14:26:03.096950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.392 [2024-12-10 14:26:03.096956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.392 [2024-12-10 14:26:03.096959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.096962] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.392 [2024-12-10 14:26:03.096972] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.096976] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.096979] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.392 [2024-12-10 14:26:03.096984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.392 [2024-12-10 14:26:03.096993] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.392 [2024-12-10 14:26:03.097066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.392 [2024-12-10 14:26:03.097071] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.392 [2024-12-10 14:26:03.097074] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.097077] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.392 [2024-12-10 14:26:03.097085] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.097088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.097091] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.392 [2024-12-10 14:26:03.097097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.392 [2024-12-10 14:26:03.097106] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.392 [2024-12-10 14:26:03.097164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.392 [2024-12-10 14:26:03.097170] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.392 [2024-12-10 14:26:03.097173] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.097176] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.392 [2024-12-10 14:26:03.097183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.097187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.097190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.392 [2024-12-10 14:26:03.097195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.392 [2024-12-10 14:26:03.097204] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.392 [2024-12-10 14:26:03.097272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.392 [2024-12-10 14:26:03.097278] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.392 [2024-12-10 14:26:03.097281] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.097284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.392 [2024-12-10 14:26:03.097293] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.097296] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.097299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.392 [2024-12-10 14:26:03.097304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.392 [2024-12-10 14:26:03.097313] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.392 [2024-12-10 14:26:03.097376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.392 [2024-12-10 14:26:03.097381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.392 [2024-12-10 14:26:03.097384] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.097388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.392 [2024-12-10 14:26:03.097395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.097400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.097404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.392 [2024-12-10 14:26:03.097409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.392 [2024-12-10 14:26:03.097418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.392 
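The discovery log dumped above carries two records: entry 0 describes the discovery subsystem itself, and entry 1 advertises the NVM subsystem nqn.2016-06.io.spdk:cnode1 over TCP/IPv4 at 10.0.0.2, transport service identifier 4420. Those three fields are all a host needs to attach the subsystem, whether with nvme-cli ("nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1") or with SPDK's public API. The program below is a minimal sketch of the SPDK path, not part of this test run: it assumes an SPDK v25-era build (spdk/env.h and spdk/nvme.h on the include path, hugepages configured), and the app name "disc_connect" is made up for illustration.

    /* Illustrative only: attach the subsystem from Discovery Log Entry 1
     * above (Transport Type 3 = TCP, Address Family 1 = IPv4,
     * 10.0.0.2:4420). Assumes SPDK public headers; build like any SPDK
     * example app. */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        /* Recent SPDK expects opts_size to be set before the init call. */
        env_opts.opts_size = sizeof(env_opts);
        spdk_env_opts_init(&env_opts);
        env_opts.name = "disc_connect";   /* hypothetical app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Transport ID filled straight from the discovery log entry. */
        trid.trtype = SPDK_NVME_TRANSPORT_TCP;
        trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
        snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
        snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
        snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

        ctrlr = spdk_nvme_connect(&trid, NULL, 0);   /* synchronous attach */
        if (ctrlr == NULL) {
            fprintf(stderr, "connect to %s failed\n", trid.subnqn);
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("attached: %s\n", (const char *)cdata->subnqn);

        spdk_nvme_detach(ctrlr);
        return 0;
    }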
[2024-12-10 14:26:03.097492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.392 [2024-12-10 14:26:03.097498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.392 [2024-12-10 14:26:03.097501] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.097504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.392 [2024-12-10 14:26:03.097512] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.097516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.097518] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.392 [2024-12-10 14:26:03.097524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.392 [2024-12-10 14:26:03.097533] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.392 [2024-12-10 14:26:03.097609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.392 [2024-12-10 14:26:03.097615] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.392 [2024-12-10 14:26:03.097618] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.097621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.392 [2024-12-10 14:26:03.097629] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.097632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.097635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.392 [2024-12-10 14:26:03.097640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.392 [2024-12-10 14:26:03.097649] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.392 [2024-12-10 14:26:03.097709] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.392 [2024-12-10 14:26:03.097715] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.392 [2024-12-10 14:26:03.097717] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.097721] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.392 [2024-12-10 14:26:03.097729] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.097732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.392 [2024-12-10 14:26:03.097735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.392 [2024-12-10 14:26:03.097741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.392 [2024-12-10 14:26:03.097750] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.392 [2024-12-10 14:26:03.097811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.393 [2024-12-10 14:26:03.097817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:24:02.393 [2024-12-10 14:26:03.097820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.097823] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.393 [2024-12-10 14:26:03.097831] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.097834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.097839] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.393 [2024-12-10 14:26:03.097844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.393 [2024-12-10 14:26:03.097853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.393 [2024-12-10 14:26:03.097910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.393 [2024-12-10 14:26:03.097915] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.393 [2024-12-10 14:26:03.097918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.097921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.393 [2024-12-10 14:26:03.097929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.097932] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.097936] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.393 [2024-12-10 14:26:03.097941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.393 [2024-12-10 14:26:03.097950] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.393 [2024-12-10 14:26:03.098028] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.393 [2024-12-10 14:26:03.098033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.393 [2024-12-10 14:26:03.098036] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.393 [2024-12-10 14:26:03.098047] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098050] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098054] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.393 [2024-12-10 14:26:03.098059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.393 [2024-12-10 14:26:03.098068] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.393 [2024-12-10 14:26:03.098136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.393 [2024-12-10 14:26:03.098141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.393 [2024-12-10 14:26:03.098144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.393 [2024-12-10 14:26:03.098156] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098159] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.393 [2024-12-10 14:26:03.098167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.393 [2024-12-10 14:26:03.098176] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.393 [2024-12-10 14:26:03.098240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.393 [2024-12-10 14:26:03.098246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.393 [2024-12-10 14:26:03.098249] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098253] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.393 [2024-12-10 14:26:03.098260] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.393 [2024-12-10 14:26:03.098274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.393 [2024-12-10 14:26:03.098283] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.393 [2024-12-10 14:26:03.098356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.393 [2024-12-10 14:26:03.098361] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.393 [2024-12-10 14:26:03.098364] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098367] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.393 [2024-12-10 14:26:03.098375] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098378] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098381] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.393 [2024-12-10 14:26:03.098387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.393 [2024-12-10 14:26:03.098396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.393 [2024-12-10 14:26:03.098454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.393 [2024-12-10 14:26:03.098460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.393 [2024-12-10 14:26:03.098463] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098466] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.393 [2024-12-10 14:26:03.098474] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098477] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.393 [2024-12-10 14:26:03.098486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.393 [2024-12-10 14:26:03.098495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.393 [2024-12-10 14:26:03.098553] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.393 [2024-12-10 14:26:03.098559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.393 [2024-12-10 14:26:03.098562] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098565] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.393 [2024-12-10 14:26:03.098573] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098579] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.393 [2024-12-10 14:26:03.098584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.393 [2024-12-10 14:26:03.098593] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.393 [2024-12-10 14:26:03.098655] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.393 [2024-12-10 14:26:03.098660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.393 [2024-12-10 14:26:03.098663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.393 [2024-12-10 14:26:03.098674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.393 [2024-12-10 14:26:03.098686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.393 [2024-12-10 14:26:03.098696] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.393 [2024-12-10 14:26:03.098756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.393 [2024-12-10 14:26:03.098762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.393 [2024-12-10 14:26:03.098765] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098768] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.393 [2024-12-10 14:26:03.098776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098779] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.393 
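Every one of the repeated NOTICE lines in this stretch ("FABRIC PROPERTY GET qid:0 cid:3 ...") is the same operation: on a fabrics transport such as TCP there is no memory-mapped register BAR, so each controller register read issued while the discovery controller shuts down is carried as a Fabrics Property Get capsule on the admin queue, and the trace prints one NOTICE per poll until CSTS.SHST reports shutdown complete (logged further down as "shutdown complete in 7 milliseconds"). As a hedged sketch, assuming an already-attached ctrlr obtained through SPDK's public API, the same registers can be observed from application code:

    /* Sketch only: read the registers the shutdown poller above keeps
     * fetching. On an NVMe-oF controller each accessor below is serviced
     * by a Property Get command over the admin queue, not a BAR read. */
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    dump_ctrlr_regs(struct spdk_nvme_ctrlr *ctrlr)
    {
        union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);
        union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);

        /* SHST == SPDK_NVME_SHST_COMPLETE (2) is what ends the poll loop. */
        printf("CSTS: RDY=%u SHST=%u\n", csts.bits.rdy, csts.bits.shst);
        /* Matches "NVMe Specification Version (VS): 1.3" printed earlier. */
        printf("VS:   %u.%u\n", vs.bits.mjr, vs.bits.mnr);
    }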
[2024-12-10 14:26:03.098788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.393 [2024-12-10 14:26:03.098797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.393 [2024-12-10 14:26:03.098852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.393 [2024-12-10 14:26:03.098858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.393 [2024-12-10 14:26:03.098861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098864] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.393 [2024-12-10 14:26:03.098873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098876] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098879] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.393 [2024-12-10 14:26:03.098885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.393 [2024-12-10 14:26:03.098894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.393 [2024-12-10 14:26:03.098956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.393 [2024-12-10 14:26:03.098961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.393 [2024-12-10 14:26:03.098964] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098967] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.393 [2024-12-10 14:26:03.098976] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.393 [2024-12-10 14:26:03.098982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.393 [2024-12-10 14:26:03.098987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.393 [2024-12-10 14:26:03.098997] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.393 [2024-12-10 14:26:03.099059] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.393 [2024-12-10 14:26:03.099065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.393 [2024-12-10 14:26:03.099067] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099071] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.394 [2024-12-10 14:26:03.099078] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099082] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099085] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.394 [2024-12-10 14:26:03.099090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.394 [2024-12-10 14:26:03.099101] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.394 [2024-12-10 14:26:03.099177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.394 [2024-12-10 14:26:03.099183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.394 [2024-12-10 14:26:03.099185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.394 [2024-12-10 14:26:03.099197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.394 [2024-12-10 14:26:03.099209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.394 [2024-12-10 14:26:03.099222] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.394 [2024-12-10 14:26:03.099277] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.394 [2024-12-10 14:26:03.099283] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.394 [2024-12-10 14:26:03.099286] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099289] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.394 [2024-12-10 14:26:03.099297] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099301] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099304] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.394 [2024-12-10 14:26:03.099309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.394 [2024-12-10 14:26:03.099318] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.394 [2024-12-10 14:26:03.099377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.394 [2024-12-10 14:26:03.099382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.394 [2024-12-10 14:26:03.099385] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.394 [2024-12-10 14:26:03.099396] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.394 [2024-12-10 14:26:03.099408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.394 [2024-12-10 14:26:03.099417] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.394 [2024-12-10 14:26:03.099476] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.394 
[2024-12-10 14:26:03.099482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.394 [2024-12-10 14:26:03.099484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099488] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.394 [2024-12-10 14:26:03.099496] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099502] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.394 [2024-12-10 14:26:03.099507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.394 [2024-12-10 14:26:03.099516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.394 [2024-12-10 14:26:03.099577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.394 [2024-12-10 14:26:03.099583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.394 [2024-12-10 14:26:03.099586] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.394 [2024-12-10 14:26:03.099597] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.394 [2024-12-10 14:26:03.099609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.394 [2024-12-10 14:26:03.099618] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.394 [2024-12-10 14:26:03.099677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.394 [2024-12-10 14:26:03.099683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.394 [2024-12-10 14:26:03.099686] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.394 [2024-12-10 14:26:03.099697] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.394 [2024-12-10 14:26:03.099709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.394 [2024-12-10 14:26:03.099717] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.394 [2024-12-10 14:26:03.099776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.394 [2024-12-10 14:26:03.099782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.394 [2024-12-10 14:26:03.099784] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:24:02.394 [2024-12-10 14:26:03.099788] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.394 [2024-12-10 14:26:03.099795] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099799] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.394 [2024-12-10 14:26:03.099807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.394 [2024-12-10 14:26:03.099816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.394 [2024-12-10 14:26:03.099872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.394 [2024-12-10 14:26:03.099877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.394 [2024-12-10 14:26:03.099880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.394 [2024-12-10 14:26:03.099892] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099895] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.394 [2024-12-10 14:26:03.099904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.394 [2024-12-10 14:26:03.099913] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.394 [2024-12-10 14:26:03.099975] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.394 [2024-12-10 14:26:03.099982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.394 [2024-12-10 14:26:03.099985] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.394 [2024-12-10 14:26:03.099996] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.099999] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.100002] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.394 [2024-12-10 14:26:03.100007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.394 [2024-12-10 14:26:03.100016] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.394 [2024-12-10 14:26:03.100094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.394 [2024-12-10 14:26:03.100100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.394 [2024-12-10 14:26:03.100102] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.100106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.394 [2024-12-10 14:26:03.100113] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.100117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.100120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.394 [2024-12-10 14:26:03.100125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.394 [2024-12-10 14:26:03.100134] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.394 [2024-12-10 14:26:03.100196] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.394 [2024-12-10 14:26:03.100202] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.394 [2024-12-10 14:26:03.100205] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.100208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.394 [2024-12-10 14:26:03.104224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.104230] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.104233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15e0690) 00:24:02.394 [2024-12-10 14:26:03.104239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.394 [2024-12-10 14:26:03.104250] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1642580, cid 3, qid 0 00:24:02.394 [2024-12-10 14:26:03.104382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.394 [2024-12-10 14:26:03.104388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.394 [2024-12-10 14:26:03.104391] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.394 [2024-12-10 14:26:03.104394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1642580) on tqpair=0x15e0690 00:24:02.394 [2024-12-10 14:26:03.104401] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:24:02.395 00:24:02.395 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:02.656 [2024-12-10 14:26:03.142182] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
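The shell trace above launches the second identify pass directly against the data subsystem: spdk_nvme_identify is handed a -r descriptor string (trtype, adrfam, traddr, trsvcid, subnqn) plus -L all to enable every debug log flag, which is consistent with the EAL parameter dump that follows and the renewed verbose nvme_tcp state-machine tracing after it. SPDK exposes a public parser for that key:value descriptor format; the fragment below is a small illustrative sketch (only spdk/nvme.h assumed) of turning the logged descriptor into a transport ID:

    /* Sketch: parse the -r style descriptor used by spdk_nvme_identify
     * above into a struct spdk_nvme_transport_id. */
    #include <stdio.h>
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_nvme_transport_id trid = {0};
        const char *descr = "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
                            "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1";

        if (spdk_nvme_transport_id_parse(&trid, descr) != 0) {
            fprintf(stderr, "failed to parse '%s'\n", descr);
            return 1;
        }
        printf("trtype=%d traddr=%s trsvcid=%s subnqn=%s\n",
               (int)trid.trtype, trid.traddr, trid.trsvcid, trid.subnqn);
        return 0;
    }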
00:24:02.656 [2024-12-10 14:26:03.142225] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1731731 ] 00:24:02.656 [2024-12-10 14:26:03.182444] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:02.656 [2024-12-10 14:26:03.182491] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:02.656 [2024-12-10 14:26:03.182497] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:02.656 [2024-12-10 14:26:03.182510] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:02.656 [2024-12-10 14:26:03.182518] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:02.656 [2024-12-10 14:26:03.186363] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:02.656 [2024-12-10 14:26:03.186389] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd14690 0 00:24:02.656 [2024-12-10 14:26:03.194230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:02.656 [2024-12-10 14:26:03.194243] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:02.656 [2024-12-10 14:26:03.194248] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:02.656 [2024-12-10 14:26:03.194251] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:02.656 [2024-12-10 14:26:03.194276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.656 [2024-12-10 14:26:03.194282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.656 [2024-12-10 14:26:03.194285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd14690) 00:24:02.656 [2024-12-10 14:26:03.194296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:02.656 [2024-12-10 14:26:03.194313] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76100, cid 0, qid 0 00:24:02.656 [2024-12-10 14:26:03.202228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.656 [2024-12-10 14:26:03.202236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.656 [2024-12-10 14:26:03.202240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.656 [2024-12-10 14:26:03.202244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76100) on tqpair=0xd14690 00:24:02.656 [2024-12-10 14:26:03.202252] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:02.656 [2024-12-10 14:26:03.202258] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:02.656 [2024-12-10 14:26:03.202263] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:02.656 [2024-12-10 14:26:03.202274] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.656 [2024-12-10 14:26:03.202278] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.656 [2024-12-10 14:26:03.202282] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd14690) 00:24:02.656 [2024-12-10 14:26:03.202288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.656 [2024-12-10 14:26:03.202300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76100, cid 0, qid 0 00:24:02.656 [2024-12-10 14:26:03.202435] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.656 [2024-12-10 14:26:03.202442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.656 [2024-12-10 14:26:03.202445] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.656 [2024-12-10 14:26:03.202448] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76100) on tqpair=0xd14690 00:24:02.656 [2024-12-10 14:26:03.202453] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:02.656 [2024-12-10 14:26:03.202463] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:02.656 [2024-12-10 14:26:03.202469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.656 [2024-12-10 14:26:03.202473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.656 [2024-12-10 14:26:03.202476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd14690) 00:24:02.656 [2024-12-10 14:26:03.202482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.656 [2024-12-10 14:26:03.202492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76100, cid 0, qid 0 00:24:02.656 [2024-12-10 14:26:03.202554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.656 [2024-12-10 14:26:03.202560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.656 [2024-12-10 14:26:03.202564] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.656 [2024-12-10 14:26:03.202567] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76100) on tqpair=0xd14690 00:24:02.656 [2024-12-10 14:26:03.202571] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:02.656 [2024-12-10 14:26:03.202579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:02.656 [2024-12-10 14:26:03.202585] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.656 [2024-12-10 14:26:03.202588] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.656 [2024-12-10 14:26:03.202592] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd14690) 00:24:02.656 [2024-12-10 14:26:03.202597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.656 [2024-12-10 14:26:03.202607] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76100, cid 0, qid 0 00:24:02.656 [2024-12-10 14:26:03.202670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.656 [2024-12-10 14:26:03.202676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.656 [2024-12-10 14:26:03.202679] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.656 [2024-12-10 14:26:03.202682] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76100) on tqpair=0xd14690 00:24:02.656 [2024-12-10 14:26:03.202687] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:02.656 [2024-12-10 14:26:03.202695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.656 [2024-12-10 14:26:03.202699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.656 [2024-12-10 14:26:03.202703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd14690) 00:24:02.656 [2024-12-10 14:26:03.202709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.656 [2024-12-10 14:26:03.202718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76100, cid 0, qid 0 00:24:02.656 [2024-12-10 14:26:03.202776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.656 [2024-12-10 14:26:03.202783] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.656 [2024-12-10 14:26:03.202786] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.656 [2024-12-10 14:26:03.202790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76100) on tqpair=0xd14690 00:24:02.656 [2024-12-10 14:26:03.202794] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:02.656 [2024-12-10 14:26:03.202799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:02.656 [2024-12-10 14:26:03.202806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:02.656 [2024-12-10 14:26:03.202915] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:02.656 [2024-12-10 14:26:03.202920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:02.656 [2024-12-10 14:26:03.202927] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.656 [2024-12-10 14:26:03.202930] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.656 [2024-12-10 14:26:03.202934] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd14690) 00:24:02.656 [2024-12-10 14:26:03.202939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.656 [2024-12-10 14:26:03.202949] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76100, cid 0, qid 0 00:24:02.656 [2024-12-10 14:26:03.203008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.656 [2024-12-10 14:26:03.203015] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.656 [2024-12-10 14:26:03.203019] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.656 [2024-12-10 14:26:03.203022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76100) on tqpair=0xd14690 00:24:02.656 [2024-12-10 
14:26:03.203026] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:02.656 [2024-12-10 14:26:03.203035] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.656 [2024-12-10 14:26:03.203038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.656 [2024-12-10 14:26:03.203042] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd14690) 00:24:02.656 [2024-12-10 14:26:03.203048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.656 [2024-12-10 14:26:03.203058] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76100, cid 0, qid 0 00:24:02.656 [2024-12-10 14:26:03.203116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.656 [2024-12-10 14:26:03.203121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.656 [2024-12-10 14:26:03.203125] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.656 [2024-12-10 14:26:03.203128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76100) on tqpair=0xd14690 00:24:02.656 [2024-12-10 14:26:03.203134] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:02.656 [2024-12-10 14:26:03.203138] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:02.656 [2024-12-10 14:26:03.203146] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:02.656 [2024-12-10 14:26:03.203154] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:02.656 [2024-12-10 14:26:03.203162] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.203165] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd14690) 00:24:02.657 [2024-12-10 14:26:03.203171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.657 [2024-12-10 14:26:03.203181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76100, cid 0, qid 0 00:24:02.657 [2024-12-10 14:26:03.203283] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.657 [2024-12-10 14:26:03.203290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.657 [2024-12-10 14:26:03.203293] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.203297] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd14690): datao=0, datal=4096, cccid=0 00:24:02.657 [2024-12-10 14:26:03.203306] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd76100) on tqpair(0xd14690): expected_datao=0, payload_size=4096 00:24:02.657 [2024-12-10 14:26:03.203310] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.203317] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.203321] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:24:02.657 [2024-12-10 14:26:03.203343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.657 [2024-12-10 14:26:03.203349] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.657 [2024-12-10 14:26:03.203352] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.203356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76100) on tqpair=0xd14690 00:24:02.657 [2024-12-10 14:26:03.203363] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:02.657 [2024-12-10 14:26:03.203369] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:02.657 [2024-12-10 14:26:03.203374] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:02.657 [2024-12-10 14:26:03.203378] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:02.657 [2024-12-10 14:26:03.203382] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:02.657 [2024-12-10 14:26:03.203386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:02.657 [2024-12-10 14:26:03.203395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:02.657 [2024-12-10 14:26:03.203401] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.203404] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.203407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd14690) 00:24:02.657 [2024-12-10 14:26:03.203413] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:02.657 [2024-12-10 14:26:03.203424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76100, cid 0, qid 0 00:24:02.657 [2024-12-10 14:26:03.203490] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.657 [2024-12-10 14:26:03.203495] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.657 [2024-12-10 14:26:03.203499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.203502] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76100) on tqpair=0xd14690 00:24:02.657 [2024-12-10 14:26:03.203508] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.203512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.203515] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd14690) 00:24:02.657 [2024-12-10 14:26:03.203520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.657 [2024-12-10 14:26:03.203526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.203530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.203535] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=1 on tqpair(0xd14690) 00:24:02.657 [2024-12-10 14:26:03.203540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.657 [2024-12-10 14:26:03.203547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.203551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.203557] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd14690) 00:24:02.657 [2024-12-10 14:26:03.203563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.657 [2024-12-10 14:26:03.203568] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.203571] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.203574] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd14690) 00:24:02.657 [2024-12-10 14:26:03.203579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.657 [2024-12-10 14:26:03.203584] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:02.657 [2024-12-10 14:26:03.203594] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:02.657 [2024-12-10 14:26:03.203600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.203603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd14690) 00:24:02.657 [2024-12-10 14:26:03.203610] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.657 [2024-12-10 14:26:03.203622] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76100, cid 0, qid 0 00:24:02.657 [2024-12-10 14:26:03.203627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76280, cid 1, qid 0 00:24:02.657 [2024-12-10 14:26:03.203631] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76400, cid 2, qid 0 00:24:02.657 [2024-12-10 14:26:03.203636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76580, cid 3, qid 0 00:24:02.657 [2024-12-10 14:26:03.203640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76700, cid 4, qid 0 00:24:02.657 [2024-12-10 14:26:03.203729] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.657 [2024-12-10 14:26:03.203735] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.657 [2024-12-10 14:26:03.203738] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.203741] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76700) on tqpair=0xd14690 00:24:02.657 [2024-12-10 14:26:03.203745] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:02.657 [2024-12-10 14:26:03.203749] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 
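At this point the trace shows the host arming asynchronous events and the keep-alive timer: one SET FEATURES ASYNC EVENT CONFIGURATION, four queued ASYNC EVENT REQUEST capsules (cid 0 through 3), then keep-alive negotiation ending with "Sending keep alive every 5000000 us". From SPDK's public API the host-side wiring reduces to registering an AER callback and polling the admin queue, which is also what paces the periodic keep-alives. A short sketch: the spdk_nvme_* calls are the real spdk/nvme.h interface, while the polling loop and printf are illustrative.

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Invoked when the controller retires one of the queued ASYNC EVENT
     * REQUESTs (cid 0-3 in the trace above). */
    static void
    aer_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)cb_arg;
        if (!spdk_nvme_cpl_is_error(cpl)) {
            printf("async event: cdw0=0x%08x\n", cpl->cdw0);
        }
    }

    /* Polling the admin queue reaps AER completions and also lets the
     * driver send its keep-alive once the negotiated interval elapses. */
    void
    service_admin_queue(struct spdk_nvme_ctrlr *ctrlr, volatile bool *running)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
        while (*running) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
    }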
00:24:02.657 [2024-12-10 14:26:03.203757] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:02.657 [2024-12-10 14:26:03.203763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:02.657 [2024-12-10 14:26:03.203768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.203771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.203774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd14690) 00:24:02.657 [2024-12-10 14:26:03.203780] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:02.657 [2024-12-10 14:26:03.203789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76700, cid 4, qid 0 00:24:02.657 [2024-12-10 14:26:03.203853] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.657 [2024-12-10 14:26:03.203858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.657 [2024-12-10 14:26:03.203861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.203866] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76700) on tqpair=0xd14690 00:24:02.657 [2024-12-10 14:26:03.203917] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:02.657 [2024-12-10 14:26:03.203927] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:02.657 [2024-12-10 14:26:03.203933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.203936] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd14690) 00:24:02.657 [2024-12-10 14:26:03.203942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.657 [2024-12-10 14:26:03.203952] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76700, cid 4, qid 0 00:24:02.657 [2024-12-10 14:26:03.204028] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.657 [2024-12-10 14:26:03.204034] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.657 [2024-12-10 14:26:03.204037] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.204040] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd14690): datao=0, datal=4096, cccid=4 00:24:02.657 [2024-12-10 14:26:03.204044] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd76700) on tqpair(0xd14690): expected_datao=0, payload_size=4096 00:24:02.657 [2024-12-10 14:26:03.204048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.204054] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.204057] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.204070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.657 [2024-12-10 14:26:03.204075] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.657 [2024-12-10 14:26:03.204078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.204081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76700) on tqpair=0xd14690 00:24:02.657 [2024-12-10 14:26:03.204089] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:02.657 [2024-12-10 14:26:03.204101] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:02.657 [2024-12-10 14:26:03.204111] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:02.657 [2024-12-10 14:26:03.204117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.657 [2024-12-10 14:26:03.204120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd14690) 00:24:02.657 [2024-12-10 14:26:03.204125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.657 [2024-12-10 14:26:03.204136] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76700, cid 4, qid 0 00:24:02.657 [2024-12-10 14:26:03.204232] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.657 [2024-12-10 14:26:03.204238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.658 [2024-12-10 14:26:03.204241] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.204244] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd14690): datao=0, datal=4096, cccid=4 00:24:02.658 [2024-12-10 14:26:03.204248] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd76700) on tqpair(0xd14690): expected_datao=0, payload_size=4096 00:24:02.658 [2024-12-10 14:26:03.204252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.204257] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.204261] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.204270] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.658 [2024-12-10 14:26:03.204275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.658 [2024-12-10 14:26:03.204278] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.204281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76700) on tqpair=0xd14690 00:24:02.658 [2024-12-10 14:26:03.204292] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:02.658 [2024-12-10 14:26:03.204301] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:02.658 [2024-12-10 14:26:03.204308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.204311] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd14690) 00:24:02.658 [2024-12-10 14:26:03.204316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.658 [2024-12-10 14:26:03.204327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76700, cid 4, qid 0 00:24:02.658 [2024-12-10 14:26:03.204399] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.658 [2024-12-10 14:26:03.204405] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.658 [2024-12-10 14:26:03.204408] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.204410] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd14690): datao=0, datal=4096, cccid=4 00:24:02.658 [2024-12-10 14:26:03.204414] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd76700) on tqpair(0xd14690): expected_datao=0, payload_size=4096 00:24:02.658 [2024-12-10 14:26:03.204418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.204428] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.204432] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.204456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.658 [2024-12-10 14:26:03.204462] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.658 [2024-12-10 14:26:03.204465] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.204468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76700) on tqpair=0xd14690 00:24:02.658 [2024-12-10 14:26:03.204475] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:02.658 [2024-12-10 14:26:03.204482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:02.658 [2024-12-10 14:26:03.204489] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:02.658 [2024-12-10 14:26:03.204496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:02.658 [2024-12-10 14:26:03.204501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:02.658 [2024-12-10 14:26:03.204506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:02.658 [2024-12-10 14:26:03.204510] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:02.658 [2024-12-10 14:26:03.204514] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:02.658 [2024-12-10 14:26:03.204518] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:02.658 [2024-12-10 14:26:03.204533] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.204536] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd14690) 00:24:02.658 
[2024-12-10 14:26:03.204542] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.658 [2024-12-10 14:26:03.204548] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.204551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.204554] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd14690) 00:24:02.658 [2024-12-10 14:26:03.204559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.658 [2024-12-10 14:26:03.204571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76700, cid 4, qid 0 00:24:02.658 [2024-12-10 14:26:03.204575] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76880, cid 5, qid 0 00:24:02.658 [2024-12-10 14:26:03.204660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.658 [2024-12-10 14:26:03.204665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.658 [2024-12-10 14:26:03.204668] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.204671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76700) on tqpair=0xd14690 00:24:02.658 [2024-12-10 14:26:03.204676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.658 [2024-12-10 14:26:03.204681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.658 [2024-12-10 14:26:03.204685] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.204688] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76880) on tqpair=0xd14690 00:24:02.658 [2024-12-10 14:26:03.204697] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.204700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd14690) 00:24:02.658 [2024-12-10 14:26:03.204705] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.658 [2024-12-10 14:26:03.204715] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76880, cid 5, qid 0 00:24:02.658 [2024-12-10 14:26:03.204781] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.658 [2024-12-10 14:26:03.204787] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.658 [2024-12-10 14:26:03.204790] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.204793] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76880) on tqpair=0xd14690 00:24:02.658 [2024-12-10 14:26:03.204800] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.204804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd14690) 00:24:02.658 [2024-12-10 14:26:03.204809] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.658 [2024-12-10 14:26:03.204818] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76880, cid 5, qid 0 00:24:02.658 [2024-12-10 14:26:03.204881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:24:02.658 [2024-12-10 14:26:03.204886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.658 [2024-12-10 14:26:03.204889] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.204892] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76880) on tqpair=0xd14690 00:24:02.658 [2024-12-10 14:26:03.204900] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.204903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd14690) 00:24:02.658 [2024-12-10 14:26:03.204909] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.658 [2024-12-10 14:26:03.204920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76880, cid 5, qid 0 00:24:02.658 [2024-12-10 14:26:03.204977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.658 [2024-12-10 14:26:03.204983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.658 [2024-12-10 14:26:03.204986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.204989] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76880) on tqpair=0xd14690 00:24:02.658 [2024-12-10 14:26:03.205002] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.205006] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd14690) 00:24:02.658 [2024-12-10 14:26:03.205012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.658 [2024-12-10 14:26:03.205017] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.205021] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd14690) 00:24:02.658 [2024-12-10 14:26:03.205026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.658 [2024-12-10 14:26:03.205032] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.205035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xd14690) 00:24:02.658 [2024-12-10 14:26:03.205040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.658 [2024-12-10 14:26:03.205046] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.205049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd14690) 00:24:02.658 [2024-12-10 14:26:03.205054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.658 [2024-12-10 14:26:03.205064] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76880, cid 5, qid 0 00:24:02.658 [2024-12-10 14:26:03.205069] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76700, cid 4, qid 0 00:24:02.658 [2024-12-10 14:26:03.205073] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76a00, cid 6, qid 0 00:24:02.658 [2024-12-10 14:26:03.205077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76b80, cid 7, qid 0 00:24:02.658 [2024-12-10 14:26:03.205227] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.658 [2024-12-10 14:26:03.205233] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.658 [2024-12-10 14:26:03.205236] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.205238] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd14690): datao=0, datal=8192, cccid=5 00:24:02.658 [2024-12-10 14:26:03.205242] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd76880) on tqpair(0xd14690): expected_datao=0, payload_size=8192 00:24:02.658 [2024-12-10 14:26:03.205246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.205257] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.205260] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.658 [2024-12-10 14:26:03.205268] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.659 [2024-12-10 14:26:03.205273] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.659 [2024-12-10 14:26:03.205276] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.659 [2024-12-10 14:26:03.205279] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd14690): datao=0, datal=512, cccid=4 00:24:02.659 [2024-12-10 14:26:03.205283] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd76700) on tqpair(0xd14690): expected_datao=0, payload_size=512 00:24:02.659 [2024-12-10 14:26:03.205288] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.659 [2024-12-10 14:26:03.205294] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.659 [2024-12-10 14:26:03.205297] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.659 [2024-12-10 14:26:03.205301] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.659 [2024-12-10 14:26:03.205306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.659 [2024-12-10 14:26:03.205309] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.659 [2024-12-10 14:26:03.205312] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd14690): datao=0, datal=512, cccid=6 00:24:02.659 [2024-12-10 14:26:03.205315] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd76a00) on tqpair(0xd14690): expected_datao=0, payload_size=512 00:24:02.659 [2024-12-10 14:26:03.205319] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.659 [2024-12-10 14:26:03.205324] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.659 [2024-12-10 14:26:03.205327] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.659 [2024-12-10 14:26:03.205332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.659 [2024-12-10 14:26:03.205337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.659 [2024-12-10 14:26:03.205339] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.659 [2024-12-10 14:26:03.205342] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0xd14690): datao=0, datal=4096, cccid=7
00:24:02.659 [2024-12-10 14:26:03.205346] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd76b80) on tqpair(0xd14690): expected_datao=0, payload_size=4096
00:24:02.659 [2024-12-10 14:26:03.205350] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:02.659 [2024-12-10 14:26:03.205355] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:02.659 [2024-12-10 14:26:03.205358] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:02.659 [2024-12-10 14:26:03.205365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:02.659 [2024-12-10 14:26:03.205370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:02.659 [2024-12-10 14:26:03.205373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:02.659 [2024-12-10 14:26:03.205377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76880) on tqpair=0xd14690
00:24:02.659 [2024-12-10 14:26:03.205386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:02.659 [2024-12-10 14:26:03.205391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:02.659 [2024-12-10 14:26:03.205394] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:02.659 [2024-12-10 14:26:03.205397] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76700) on tqpair=0xd14690
00:24:02.659 [2024-12-10 14:26:03.205405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:02.659 [2024-12-10 14:26:03.205410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:02.659 [2024-12-10 14:26:03.205413] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:02.659 [2024-12-10 14:26:03.205416] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76a00) on tqpair=0xd14690
00:24:02.659 [2024-12-10 14:26:03.205422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:02.659 [2024-12-10 14:26:03.205427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:02.659 [2024-12-10 14:26:03.205429] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:02.659 [2024-12-10 14:26:03.205433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76b80) on tqpair=0xd14690
00:24:02.659 =====================================================
00:24:02.659 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:02.659 =====================================================
00:24:02.659 Controller Capabilities/Features
00:24:02.659 ================================
00:24:02.659 Vendor ID: 8086
00:24:02.659 Subsystem Vendor ID: 8086
00:24:02.659 Serial Number: SPDK00000000000001
00:24:02.659 Model Number: SPDK bdev Controller
00:24:02.659 Firmware Version: 25.01
00:24:02.659 Recommended Arb Burst: 6
00:24:02.659 IEEE OUI Identifier: e4 d2 5c
00:24:02.659 Multi-path I/O
00:24:02.659 May have multiple subsystem ports: Yes
00:24:02.659 May have multiple controllers: Yes
00:24:02.659 Associated with SR-IOV VF: No
00:24:02.659 Max Data Transfer Size: 131072
00:24:02.659 Max Number of Namespaces: 32
00:24:02.659 Max Number of I/O Queues: 127
00:24:02.659 NVMe Specification Version (VS): 1.3
00:24:02.659 NVMe Specification Version (Identify): 1.3
00:24:02.659 Maximum Queue Entries: 128
00:24:02.659 Contiguous Queues Required: Yes
00:24:02.659 Arbitration Mechanisms Supported
00:24:02.659 Weighted Round Robin: Not Supported
00:24:02.659 Vendor Specific: Not Supported
00:24:02.659 Reset Timeout: 15000 ms
00:24:02.659 Doorbell Stride: 4 bytes
00:24:02.659 NVM Subsystem Reset: Not Supported
00:24:02.659 Command Sets Supported
00:24:02.659 NVM Command Set: Supported
00:24:02.659 Boot Partition: Not Supported
00:24:02.659 Memory Page Size Minimum: 4096 bytes
00:24:02.659 Memory Page Size Maximum: 4096 bytes
00:24:02.659 Persistent Memory Region: Not Supported
00:24:02.659 Optional Asynchronous Events Supported
00:24:02.659 Namespace Attribute Notices: Supported
00:24:02.659 Firmware Activation Notices: Not Supported
00:24:02.659 ANA Change Notices: Not Supported
00:24:02.659 PLE Aggregate Log Change Notices: Not Supported
00:24:02.659 LBA Status Info Alert Notices: Not Supported
00:24:02.659 EGE Aggregate Log Change Notices: Not Supported
00:24:02.659 Normal NVM Subsystem Shutdown event: Not Supported
00:24:02.659 Zone Descriptor Change Notices: Not Supported
00:24:02.659 Discovery Log Change Notices: Not Supported
00:24:02.659 Controller Attributes
00:24:02.659 128-bit Host Identifier: Supported
00:24:02.659 Non-Operational Permissive Mode: Not Supported
00:24:02.659 NVM Sets: Not Supported
00:24:02.659 Read Recovery Levels: Not Supported
00:24:02.659 Endurance Groups: Not Supported
00:24:02.659 Predictable Latency Mode: Not Supported
00:24:02.659 Traffic Based Keep Alive: Not Supported
00:24:02.659 Namespace Granularity: Not Supported
00:24:02.659 SQ Associations: Not Supported
00:24:02.659 UUID List: Not Supported
00:24:02.659 Multi-Domain Subsystem: Not Supported
00:24:02.659 Fixed Capacity Management: Not Supported
00:24:02.659 Variable Capacity Management: Not Supported
00:24:02.659 Delete Endurance Group: Not Supported
00:24:02.659 Delete NVM Set: Not Supported
00:24:02.659 Extended LBA Formats Supported: Not Supported
00:24:02.659 Flexible Data Placement Supported: Not Supported
00:24:02.659
00:24:02.659 Controller Memory Buffer Support
00:24:02.659 ================================
00:24:02.659 Supported: No
00:24:02.659
00:24:02.659 Persistent Memory Region Support
00:24:02.659 ================================
00:24:02.659 Supported: No
00:24:02.659
00:24:02.659 Admin Command Set Attributes
00:24:02.659 ============================
00:24:02.659 Security Send/Receive: Not Supported
00:24:02.659 Format NVM: Not Supported
00:24:02.659 Firmware Activate/Download: Not Supported
00:24:02.659 Namespace Management: Not Supported
00:24:02.659 Device Self-Test: Not Supported
00:24:02.659 Directives: Not Supported
00:24:02.659 NVMe-MI: Not Supported
00:24:02.659 Virtualization Management: Not Supported
00:24:02.659 Doorbell Buffer Config: Not Supported
00:24:02.659 Get LBA Status Capability: Not Supported
00:24:02.659 Command & Feature Lockdown Capability: Not Supported
00:24:02.659 Abort Command Limit: 4
00:24:02.659 Async Event Request Limit: 4
00:24:02.659 Number of Firmware Slots: N/A
00:24:02.659 Firmware Slot 1 Read-Only: N/A
00:24:02.659 Firmware Activation Without Reset: N/A
00:24:02.659 Multiple Update Detection Support: N/A
00:24:02.659 Firmware Update Granularity: No Information Provided
00:24:02.659 Per-Namespace SMART Log: No
00:24:02.659 Asymmetric Namespace Access Log Page: Not Supported
00:24:02.659 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:24:02.659 Command Effects Log Page: Supported
00:24:02.659 Get Log Page Extended Data: Supported
00:24:02.659 Telemetry Log Pages: Not Supported
00:24:02.659 Persistent Event Log Pages: Not Supported
00:24:02.659 Supported Log Pages Log Page: May Support
00:24:02.659 Commands Supported & Effects Log Page: Not Supported
00:24:02.659 Feature Identifiers & Effects Log Page: May Support
00:24:02.659 NVMe-MI Commands & Effects Log Page: May Support
00:24:02.659 Data Area 4 for Telemetry Log: Not Supported
00:24:02.659 Error Log Page Entries Supported: 128
00:24:02.659 Keep Alive: Supported
00:24:02.659 Keep Alive Granularity: 10000 ms
00:24:02.659
00:24:02.659 NVM Command Set Attributes
00:24:02.659 ==========================
00:24:02.659 Submission Queue Entry Size
00:24:02.659 Max: 64
00:24:02.659 Min: 64
00:24:02.659 Completion Queue Entry Size
00:24:02.659 Max: 16
00:24:02.659 Min: 16
00:24:02.659 Number of Namespaces: 32
00:24:02.659 Compare Command: Supported
00:24:02.659 Write Uncorrectable Command: Not Supported
00:24:02.659 Dataset Management Command: Supported
00:24:02.659 Write Zeroes Command: Supported
00:24:02.659 Set Features Save Field: Not Supported
00:24:02.659 Reservations: Supported
00:24:02.659 Timestamp: Not Supported
00:24:02.659 Copy: Supported
00:24:02.659 Volatile Write Cache: Present
00:24:02.659 Atomic Write Unit (Normal): 1
00:24:02.659 Atomic Write Unit (PFail): 1
00:24:02.659 Atomic Compare & Write Unit: 1
00:24:02.659 Fused Compare & Write: Supported
00:24:02.659 Scatter-Gather List
00:24:02.659 SGL Command Set: Supported
00:24:02.659 SGL Keyed: Supported
00:24:02.659 SGL Bit Bucket Descriptor: Not Supported
00:24:02.659 SGL Metadata Pointer: Not Supported
00:24:02.659 Oversized SGL: Not Supported
00:24:02.660 SGL Metadata Address: Not Supported
00:24:02.660 SGL Offset: Supported
00:24:02.660 Transport SGL Data Block: Not Supported
00:24:02.660 Replay Protected Memory Block: Not Supported
00:24:02.660
00:24:02.660 Firmware Slot Information
00:24:02.660 =========================
00:24:02.660 Active slot: 1
00:24:02.660 Slot 1 Firmware Revision: 25.01
00:24:02.660
00:24:02.660
00:24:02.660 Commands Supported and Effects
00:24:02.660 ==============================
00:24:02.660 Admin Commands
00:24:02.660 --------------
00:24:02.660 Get Log Page (02h): Supported
00:24:02.660 Identify (06h): Supported
00:24:02.660 Abort (08h): Supported
00:24:02.660 Set Features (09h): Supported
00:24:02.660 Get Features (0Ah): Supported
00:24:02.660 Asynchronous Event Request (0Ch): Supported
00:24:02.660 Keep Alive (18h): Supported
00:24:02.660 I/O Commands
00:24:02.660 ------------
00:24:02.660 Flush (00h): Supported LBA-Change
00:24:02.660 Write (01h): Supported LBA-Change
00:24:02.660 Read (02h): Supported
00:24:02.660 Compare (05h): Supported
00:24:02.660 Write Zeroes (08h): Supported LBA-Change
00:24:02.660 Dataset Management (09h): Supported LBA-Change
00:24:02.660 Copy (19h): Supported LBA-Change
00:24:02.660
00:24:02.660 Error Log
00:24:02.660 =========
00:24:02.660
00:24:02.660 Arbitration
00:24:02.660 ===========
00:24:02.660 Arbitration Burst: 1
00:24:02.660
00:24:02.660 Power Management
00:24:02.660 ================
00:24:02.660 Number of Power States: 1
00:24:02.660 Current Power State: Power State #0
00:24:02.660 Power State #0:
00:24:02.660 Max Power: 0.00 W
00:24:02.660 Non-Operational State: Operational
00:24:02.660 Entry Latency: Not Reported
00:24:02.660 Exit Latency: Not Reported
00:24:02.660 Relative Read Throughput: 0
00:24:02.660 Relative Read Latency: 0
00:24:02.660 Relative Write Throughput: 0
00:24:02.660 Relative Write Latency: 0
00:24:02.660 Idle Power: Not Reported
00:24:02.660 Active Power: Not Reported
00:24:02.660 Non-Operational Permissive Mode: Not Supported
00:24:02.660
00:24:02.660 Health Information
00:24:02.660 ==================
00:24:02.660 Critical Warnings:
00:24:02.660 Available Spare Space: OK
00:24:02.660 Temperature: OK
00:24:02.660 Device Reliability: OK
00:24:02.660 Read Only: No
00:24:02.660 Volatile Memory Backup: OK
00:24:02.660 Current Temperature: 0 Kelvin (-273 Celsius)
00:24:02.660 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:24:02.660 Available Spare: 0%
00:24:02.660 Available Spare Threshold: 0%
[2024-12-10 14:26:03.205508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:02.660 [2024-12-10 14:26:03.205513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd14690)
00:24:02.660 [2024-12-10 14:26:03.205518] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.660 [2024-12-10 14:26:03.205530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76b80, cid 7, qid 0
00:24:02.660 [2024-12-10 14:26:03.205603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:02.660 [2024-12-10 14:26:03.205609] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:02.660 [2024-12-10 14:26:03.205612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:02.660 [2024-12-10 14:26:03.205615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76b80) on tqpair=0xd14690
00:24:02.660 [2024-12-10 14:26:03.205644] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:24:02.660 [2024-12-10 14:26:03.205653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76100) on tqpair=0xd14690
00:24:02.660 [2024-12-10 14:26:03.205658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:02.660 [2024-12-10 14:26:03.205662] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76280) on tqpair=0xd14690
00:24:02.660 [2024-12-10 14:26:03.205666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:02.660 [2024-12-10 14:26:03.205670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76400) on tqpair=0xd14690
00:24:02.660 [2024-12-10 14:26:03.205674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:02.660 [2024-12-10 14:26:03.205678] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76580) on tqpair=0xd14690
00:24:02.660 [2024-12-10 14:26:03.205682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:02.660 [2024-12-10 14:26:03.205688] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:02.660 [2024-12-10 14:26:03.205692] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:02.660 [2024-12-10 14:26:03.205695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd14690)
00:24:02.660 [2024-12-10 14:26:03.205701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.660 [2024-12-10 14:26:03.205712] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76580, cid 3, qid 0
00:24:02.660 [2024-12-10 14:26:03.205775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:02.660 [2024-12-10 14:26:03.205781] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:02.660 [2024-12-10 14:26:03.205784] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:02.660 [2024-12-10 14:26:03.205787] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76580) on tqpair=0xd14690
00:24:02.660 [2024-12-10 14:26:03.205792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:02.660 [2024-12-10 14:26:03.205795] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:02.660 [2024-12-10 14:26:03.205799] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd14690)
00:24:02.660 [2024-12-10 14:26:03.205804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.660 [2024-12-10 14:26:03.205816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76580, cid 3, qid 0
00:24:02.660 [2024-12-10 14:26:03.205884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:02.660 [2024-12-10 14:26:03.205889] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:02.660 [2024-12-10 14:26:03.205892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:02.660 [2024-12-10 14:26:03.205895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76580) on tqpair=0xd14690
00:24:02.660 [2024-12-10 14:26:03.205899] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us
00:24:02.660 [2024-12-10 14:26:03.205903] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms
00:24:02.660 [2024-12-10 14:26:03.205913] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:02.660 [2024-12-10 14:26:03.205917] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:02.660 [2024-12-10 14:26:03.205920] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd14690)
00:24:02.660 [2024-12-10 14:26:03.205925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.660 [2024-12-10 14:26:03.205934] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76580, cid 3, qid 0
00:24:02.660 [2024-12-10 14:26:03.205993] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:02.660 [2024-12-10 14:26:03.205999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:02.660 [2024-12-10 14:26:03.206001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:02.660 [2024-12-10 14:26:03.206005] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76580) on tqpair=0xd14690
00:24:02.660 [2024-12-10 14:26:03.206013] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:02.660 [2024-12-10 14:26:03.206016] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:02.660 [2024-12-10 14:26:03.206019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd14690)
00:24:02.660 [2024-12-10 14:26:03.206025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.660 [2024-12-10 14:26:03.206034] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76580, cid 3, qid 0
00:24:02.660 [2024-12-10 14:26:03.206091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:02.660 [2024-12-10 14:26:03.206096] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:02.660 [2024-12-10 14:26:03.206099] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:02.660 [2024-12-10 14:26:03.206103] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76580) on tqpair=0xd14690
00:24:02.660 [2024-12-10 14:26:03.206110] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:02.660 [2024-12-10 14:26:03.206114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:02.660 [2024-12-10 14:26:03.206117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd14690)
00:24:02.660 [2024-12-10 14:26:03.206122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.660 [2024-12-10 14:26:03.206132] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76580, cid 3, qid 0
00:24:02.661 [2024-12-10 14:26:03.206190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:02.661 [2024-12-10 14:26:03.206196] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:02.661 [2024-12-10 14:26:03.206199] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:02.661 [2024-12-10 14:26:03.206202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76580) on tqpair=0xd14690
00:24:02.661 [2024-12-10 14:26:03.206210] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:02.661 [2024-12-10 14:26:03.206213] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:02.661 [2024-12-10 14:26:03.210224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd14690)
00:24:02.661 [2024-12-10 14:26:03.210233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.661 [2024-12-10 14:26:03.210243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd76580, cid 3, qid 0
00:24:02.661 [2024-12-10 14:26:03.210394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:02.661 [2024-12-10 14:26:03.210400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:02.661 [2024-12-10 14:26:03.210403] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:02.661 [2024-12-10 14:26:03.210407] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd76580) on tqpair=0xd14690
00:24:02.661 [2024-12-10 14:26:03.210413] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds
00:24:02.660 Life Percentage Used: 0%
00:24:02.661 Data Units Read: 0
00:24:02.661 Data Units Written: 0
00:24:02.661 Host Read Commands: 0
00:24:02.661 Host Write Commands: 0
00:24:02.661 Controller Busy Time: 0 minutes
00:24:02.661 Power Cycles: 0
00:24:02.661 Power On Hours: 0 hours
00:24:02.661 Unsafe Shutdowns: 0
00:24:02.661 Unrecoverable Media Errors: 0
00:24:02.661 Lifetime Error Log Entries: 0
00:24:02.661 Warning Temperature Time: 0 minutes
00:24:02.661 Critical Temperature Time: 0 minutes
00:24:02.661
00:24:02.661 Number of Queues
00:24:02.661 ================
00:24:02.661 Number of I/O Submission Queues: 127
00:24:02.661 Number of I/O Completion Queues: 127
00:24:02.661
00:24:02.661 Active Namespaces
00:24:02.661 =================
00:24:02.661 Namespace ID:1
00:24:02.661 Error Recovery Timeout: Unlimited
00:24:02.661 Command Set Identifier: NVM (00h)
00:24:02.661 Deallocate: Supported
00:24:02.661 Deallocated/Unwritten Error: Not Supported
00:24:02.661 Deallocated Read Value: Unknown
00:24:02.661 Deallocate in Write Zeroes: Not Supported
00:24:02.661 Deallocated Guard Field: 0xFFFF
00:24:02.661 Flush: Supported
00:24:02.661 Reservation: Supported
00:24:02.661 Namespace Sharing Capabilities: Multiple Controllers
00:24:02.661 Size (in LBAs): 131072 (0GiB)
00:24:02.661 Capacity (in LBAs): 131072 (0GiB)
00:24:02.661 Utilization (in LBAs): 131072 (0GiB)
00:24:02.661 NGUID: ABCDEF0123456789ABCDEF0123456789
00:24:02.661 EUI64: ABCDEF0123456789
00:24:02.661 UUID: 6a8cc414-7256-47b3-b42a-28d34e27548b
00:24:02.661 Thin Provisioning: Not Supported
00:24:02.661 Per-NS Atomic Units: Yes
00:24:02.661 Atomic Boundary Size (Normal): 0
00:24:02.661 Atomic Boundary Size (PFail): 0
00:24:02.661 Atomic Boundary Offset: 0
00:24:02.661 Maximum Single Source Range Length: 65535
00:24:02.661 Maximum Copy Length: 65535
00:24:02.661 Maximum Source Range Count: 1
00:24:02.661 NGUID/EUI64 Never Reused: No
00:24:02.661 Namespace Write Protected: No
00:24:02.661 Number of LBA Formats: 1
00:24:02.661 Current LBA Format: LBA Format #00
00:24:02.661 LBA Format #00: Data Size: 512 Metadata Size: 0
00:24:02.661
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:02.661 rmmod nvme_tcp
00:24:02.661 rmmod nvme_fabrics
00:24:02.661 rmmod nvme_keyring
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1731584 ']'
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1731584
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1731584 ']'
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1731584
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1731584
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1731584'
00:24:02.661 killing process with pid 1731584
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1731584
00:24:02.661 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1731584
00:24:02.920 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:02.920 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:02.920 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:02.920 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr
00:24:02.920 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save
00:24:02.920 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:02.920 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore
00:24:02.920 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:02.920 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:02.920 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:02.920 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:02.920 14:26:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:05.455
00:24:05.455 real 0m10.611s
00:24:05.455 user 0m7.909s
00:24:05.455 sys 0m5.443s
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:05.455 ************************************
00:24:05.455 END TEST nvmf_identify
00:24:05.455 ************************************
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:05.455 ************************************
00:24:05.455 START TEST nvmf_perf
00:24:05.455 ************************************
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:24:05.455 * Looking for test storage...
00:24:05.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-:
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-:
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<'
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:24:05.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:05.455 --rc genhtml_branch_coverage=1
00:24:05.455 --rc genhtml_function_coverage=1
00:24:05.455 --rc genhtml_legend=1
00:24:05.455 --rc geninfo_all_blocks=1
00:24:05.455 --rc geninfo_unexecuted_blocks=1
00:24:05.455
00:24:05.455 '
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:24:05.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:05.455 --rc genhtml_branch_coverage=1
00:24:05.455 --rc genhtml_function_coverage=1
00:24:05.455 --rc genhtml_legend=1
00:24:05.455 --rc geninfo_all_blocks=1
00:24:05.455 --rc geninfo_unexecuted_blocks=1
00:24:05.455
00:24:05.455 '
00:24:05.455 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:24:05.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:05.456 --rc genhtml_branch_coverage=1
00:24:05.456 --rc genhtml_function_coverage=1
00:24:05.456 --rc genhtml_legend=1
00:24:05.456 --rc geninfo_all_blocks=1
00:24:05.456 --rc geninfo_unexecuted_blocks=1
00:24:05.456
00:24:05.456 '
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:24:05.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:05.456 --rc genhtml_branch_coverage=1
00:24:05.456 --rc genhtml_function_coverage=1
00:24:05.456 --rc genhtml_legend=1
00:24:05.456 --rc geninfo_all_blocks=1
00:24:05.456 --rc geninfo_unexecuted_blocks=1
00:24:05.456
00:24:05.456 '
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- #
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:05.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.456 14:26:05 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:05.456 14:26:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:12.025 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:12.025 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:12.025 Found net devices under 0000:af:00.0: cvl_0_0 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:12.025 14:26:12 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:12.025 Found net devices under 0000:af:00.1: cvl_0_1 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:12.025 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:12.026 14:26:12 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:12.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:12.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:24:12.026 00:24:12.026 --- 10.0.0.2 ping statistics --- 00:24:12.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.026 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:12.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:12.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:24:12.026 00:24:12.026 --- 10.0.0.1 ping statistics --- 00:24:12.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.026 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:12.026 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:12.284 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:12.284 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:12.284 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:12.284 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:12.284 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1735635 00:24:12.284 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:12.284 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1735635 00:24:12.284 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1735635 ']' 00:24:12.284 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.284 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.284 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:12.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.284 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.284 14:26:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:12.284 [2024-12-10 14:26:12.831123] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:24:12.284 [2024-12-10 14:26:12.831163] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.284 [2024-12-10 14:26:12.914402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:12.284 [2024-12-10 14:26:12.955873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.284 [2024-12-10 14:26:12.955908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.284 [2024-12-10 14:26:12.955916] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.284 [2024-12-10 14:26:12.955921] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.284 [2024-12-10 14:26:12.955926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:12.284 [2024-12-10 14:26:12.957354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.284 [2024-12-10 14:26:12.957464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.284 [2024-12-10 14:26:12.957569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.284 [2024-12-10 14:26:12.957570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:12.542 14:26:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:12.542 14:26:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:24:12.542 14:26:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:12.542 14:26:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:12.542 14:26:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:12.542 14:26:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.542 14:26:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:12.542 14:26:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:15.821 14:26:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:15.821 14:26:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:15.821 14:26:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:24:15.821 14:26:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:16.079 14:26:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
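The trace above pulls the local NVMe controller's PCI address (0000:5e:00.0) out of the generated bdev config and creates a 64 MiB malloc bdev; the trace that follows registers both bdevs behind an NVMe-oF TCP subsystem. A minimal standalone sketch of that same RPC sequence, assuming only a running nvmf_tgt listening on the default /var/tmp/spdk.sock (every RPC name, flag, NQN, and address below is taken verbatim from this run's host/perf.sh trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Extract the local NVMe controller's transport address from the bdev config.
    traddr=$($rpc framework_get_config bdev \
        | jq -r '.[].params | select(.name=="Nvme0").traddr')
    # 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE).
    $rpc bdev_malloc_create 64 512
    # Stand up the TCP target: transport, subsystem, two namespaces, and
    # data plus discovery listeners on 10.0.0.2:4420.
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, the fabric-side runs below drive spdk_nvme_perf at it with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420', while the first run uses -r 'trtype:PCIe traddr:0000:5e:00.0' against the same SSD as a local baseline.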
00:24:16.079 14:26:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:24:16.079 14:26:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:16.079 14:26:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:16.079 14:26:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:16.079 [2024-12-10 14:26:16.736558] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.079 14:26:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:16.337 14:26:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:16.337 14:26:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:16.594 14:26:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:16.594 14:26:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:16.852 14:26:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:16.852 [2024-12-10 14:26:17.552881] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.852 14:26:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:17.110 14:26:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:24:17.110 14:26:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:24:17.110 14:26:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:17.110 14:26:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:24:18.482 Initializing NVMe Controllers 00:24:18.482 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:24:18.482 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:24:18.482 Initialization complete. Launching workers. 
00:24:18.482 ======================================================== 00:24:18.482 Latency(us) 00:24:18.482 Device Information : IOPS MiB/s Average min max 00:24:18.482 PCIE (0000:5e:00.0) NSID 1 from core 0: 97403.27 380.48 328.03 9.22 6189.50 00:24:18.482 ======================================================== 00:24:18.482 Total : 97403.27 380.48 328.03 9.22 6189.50 00:24:18.482 00:24:18.482 14:26:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:19.854 Initializing NVMe Controllers 00:24:19.854 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:19.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:19.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:19.854 Initialization complete. Launching workers. 00:24:19.854 ======================================================== 00:24:19.854 Latency(us) 00:24:19.854 Device Information : IOPS MiB/s Average min max 00:24:19.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 158.00 0.62 6403.28 109.09 45035.97 00:24:19.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 76.00 0.30 13207.15 7203.02 47885.90 00:24:19.854 ======================================================== 00:24:19.854 Total : 234.00 0.91 8613.08 109.09 47885.90 00:24:19.854 00:24:19.854 14:26:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:21.226 Initializing NVMe Controllers 00:24:21.226 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:21.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:21.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:21.226 Initialization complete. Launching workers. 00:24:21.226 ======================================================== 00:24:21.226 Latency(us) 00:24:21.226 Device Information : IOPS MiB/s Average min max 00:24:21.226 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11496.09 44.91 2784.15 469.99 6567.79 00:24:21.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3879.69 15.16 8291.87 6356.88 15805.52 00:24:21.227 ======================================================== 00:24:21.227 Total : 15375.79 60.06 4173.88 469.99 15805.52 00:24:21.227 00:24:21.227 14:26:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:21.227 14:26:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:21.227 14:26:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:23.755 Initializing NVMe Controllers 00:24:23.755 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:23.755 Controller IO queue size 128, less than required. 00:24:23.755 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:24:23.755 Controller IO queue size 128, less than required. 00:24:23.755 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:23.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:23.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:23.755 Initialization complete. Launching workers. 00:24:23.755 ======================================================== 00:24:23.755 Latency(us) 00:24:23.755 Device Information : IOPS MiB/s Average min max 00:24:23.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1807.44 451.86 71825.49 54516.44 119581.36 00:24:23.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 626.27 156.57 214505.17 65778.84 326528.88 00:24:23.755 ======================================================== 00:24:23.755 Total : 2433.71 608.43 108541.30 54516.44 326528.88 00:24:23.755 00:24:23.755 14:26:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:23.755 No valid NVMe controllers or AIO or URING devices found 00:24:23.755 Initializing NVMe Controllers 00:24:23.755 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:23.755 Controller IO queue size 128, less than required. 00:24:23.755 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:23.755 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:23.755 Controller IO queue size 128, less than required. 00:24:23.755 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:23.755 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:23.755 WARNING: Some requested NVMe devices were skipped 00:24:23.756 14:26:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:26.282 Initializing NVMe Controllers 00:24:26.283 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:26.283 Controller IO queue size 128, less than required. 00:24:26.283 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:26.283 Controller IO queue size 128, less than required. 00:24:26.283 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:26.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:26.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:26.283 Initialization complete. Launching workers. 
00:24:26.283 00:24:26.283 ==================== 00:24:26.283 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:26.283 TCP transport: 00:24:26.283 polls: 11610 00:24:26.283 idle_polls: 8231 00:24:26.283 sock_completions: 3379 00:24:26.283 nvme_completions: 6237 00:24:26.283 submitted_requests: 9346 00:24:26.283 queued_requests: 1 00:24:26.283 00:24:26.283 ==================== 00:24:26.283 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:26.283 TCP transport: 00:24:26.283 polls: 11413 00:24:26.283 idle_polls: 7537 00:24:26.283 sock_completions: 3876 00:24:26.283 nvme_completions: 6969 00:24:26.283 submitted_requests: 10494 00:24:26.283 queued_requests: 1 00:24:26.283 ======================================================== 00:24:26.283 Latency(us) 00:24:26.283 Device Information : IOPS MiB/s Average min max 00:24:26.283 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1555.77 388.94 84678.98 53727.62 135847.22 00:24:26.283 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1738.39 434.60 73899.93 45798.36 109196.37 00:24:26.283 ======================================================== 00:24:26.283 Total : 3294.16 823.54 78990.67 45798.36 135847.22 00:24:26.283 00:24:26.283 14:26:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:26.283 14:26:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:26.541 rmmod nvme_tcp 00:24:26.541 rmmod nvme_fabrics 00:24:26.541 rmmod nvme_keyring 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1735635 ']' 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1735635 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1735635 ']' 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1735635 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1735635 00:24:26.541 14:26:27 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1735635' 00:24:26.541 killing process with pid 1735635 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1735635 00:24:26.541 14:26:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1735635 00:24:28.440 14:26:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:28.440 14:26:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:28.440 14:26:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:28.440 14:26:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:28.440 14:26:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:28.440 14:26:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:28.440 14:26:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:28.440 14:26:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.440 14:26:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:28.440 14:26:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.440 14:26:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.440 14:26:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.345 14:26:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:30.345 00:24:30.345 real 0m25.110s 00:24:30.345 user 1m2.904s 00:24:30.345 sys 0m8.868s 00:24:30.345 14:26:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:30.345 14:26:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:30.345 ************************************ 00:24:30.345 END TEST nvmf_perf 00:24:30.345 ************************************ 00:24:30.345 14:26:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:30.345 14:26:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:30.345 14:26:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:30.345 14:26:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.345 ************************************ 00:24:30.345 START TEST nvmf_fio_host 00:24:30.345 ************************************ 00:24:30.345 14:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:30.345 * Looking for test storage... 
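The nvmf_perf teardown just above follows the same nvmftestfini path the nvmf_identify run took earlier: unload the kernel nvme-tcp stack, strip only the SPDK-tagged iptables rules, and remove the target's network namespace before the wall-clock summary (real 0m25.110s) is printed. A hedged sketch of that cleanup, using the cvl_0_1 interface and cvl_0_0_ns_spdk namespace names from this run; _remove_spdk_ns is never expanded in the trace, so the netns deletion line here is an assumption, not SPDK's actual helper body:

    modprobe -v -r nvme-tcp                               # yields the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
    modprobe -v -r nvme-fabrics                           # second unload traced in nvmf/common.sh
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # keep everything except SPDK-tagged rules (the iptr helper)
    ip netns delete cvl_0_0_ns_spdk                       # assumed equivalent of the _remove_spdk_ns call
    ip -4 addr flush cvl_0_1                              # matches the final flush before the timing summary

The nvmf_fio_host test that starts here rebuilds the same plumbing from scratch via nvmftestinit, which is why the prologue below repeats the e810 PCI device scan seen at the top of nvmf_perf.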
00:24:30.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:30.345 14:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:30.345 14:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:30.345 14:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:30.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.345 --rc genhtml_branch_coverage=1 00:24:30.345 --rc genhtml_function_coverage=1 00:24:30.345 --rc genhtml_legend=1 00:24:30.345 --rc geninfo_all_blocks=1 00:24:30.345 --rc geninfo_unexecuted_blocks=1 00:24:30.345 00:24:30.345 ' 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:30.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.345 --rc genhtml_branch_coverage=1 00:24:30.345 --rc genhtml_function_coverage=1 00:24:30.345 --rc genhtml_legend=1 00:24:30.345 --rc geninfo_all_blocks=1 00:24:30.345 --rc geninfo_unexecuted_blocks=1 00:24:30.345 00:24:30.345 ' 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:30.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.345 --rc genhtml_branch_coverage=1 00:24:30.345 --rc genhtml_function_coverage=1 00:24:30.345 --rc genhtml_legend=1 00:24:30.345 --rc geninfo_all_blocks=1 00:24:30.345 --rc geninfo_unexecuted_blocks=1 00:24:30.345 00:24:30.345 ' 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:30.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.345 --rc genhtml_branch_coverage=1 00:24:30.345 --rc genhtml_function_coverage=1 00:24:30.345 --rc genhtml_legend=1 00:24:30.345 --rc geninfo_all_blocks=1 00:24:30.345 --rc geninfo_unexecuted_blocks=1 00:24:30.345 00:24:30.345 ' 00:24:30.345 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.346 14:26:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:30.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:30.346 
14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:30.346 14:26:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:36.919 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:36.919 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:36.919 Found net devices under 0000:af:00.0: cvl_0_0 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:36.919 Found net devices under 0000:af:00.1: cvl_0_1 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:36.919 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:36.920 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:36.920 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:36.920 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:36.920 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:36.920 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:36.920 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:36.920 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:36.920 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:36.920 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:36.920 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:36.920 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:36.920 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:36.920 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:36.920 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:36.920 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:36.920 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:36.920 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:36.920 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:36.920 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:36.920 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:37.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:37.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:24:37.179 00:24:37.179 --- 10.0.0.2 ping statistics --- 00:24:37.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.179 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:37.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:37.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:24:37.179 00:24:37.179 --- 10.0.0.1 ping statistics --- 00:24:37.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.179 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1742177 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1742177 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1742177 ']' 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:37.179 14:26:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.179 [2024-12-10 14:26:37.859740] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
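Condensed from the trace above, the namespace plumbing that gives the target and the initiator their own E810 port looks like this (device names and addresses as in this run; the iptables comment string and the nvmf_tgt binary path are shortened here):

    ip netns add cvl_0_0_ns_spdk                      # target gets a private netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move one port inside it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF                # tagged so teardown can strip it
    ping -c 1 10.0.0.2                                # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # target runs inside the ns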
00:24:37.179 [2024-12-10 14:26:37.859794] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.438 [2024-12-10 14:26:37.942932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:37.438 [2024-12-10 14:26:37.981988] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.438 [2024-12-10 14:26:37.982025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.438 [2024-12-10 14:26:37.982031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.438 [2024-12-10 14:26:37.982037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.438 [2024-12-10 14:26:37.982042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.438 [2024-12-10 14:26:37.983469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.438 [2024-12-10 14:26:37.983575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.438 [2024-12-10 14:26:37.983684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.438 [2024-12-10 14:26:37.983685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:37.438 14:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:37.438 14:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:37.438 14:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:37.695 [2024-12-10 14:26:38.253578] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.695 14:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:37.695 14:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:37.695 14:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.695 14:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:37.952 Malloc1 00:24:37.952 14:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:38.209 14:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:38.466 14:26:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:38.466 [2024-12-10 14:26:39.133651] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.466 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:38.724 14:26:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:38.981 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:38.981 fio-3.35 00:24:38.981 Starting 1 thread 00:24:41.508 00:24:41.508 test: (groupid=0, jobs=1): 
err= 0: pid=1742557: Tue Dec 10 14:26:42 2024 00:24:41.508 read: IOPS=11.9k, BW=46.4MiB/s (48.7MB/s)(93.2MiB/2006msec) 00:24:41.508 slat (nsec): min=1504, max=241376, avg=1711.93, stdev=2203.45 00:24:41.508 clat (usec): min=3122, max=9888, avg=5945.56, stdev=457.64 00:24:41.508 lat (usec): min=3155, max=9889, avg=5947.27, stdev=457.62 00:24:41.508 clat percentiles (usec): 00:24:41.508 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5604], 00:24:41.508 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:24:41.508 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652], 00:24:41.508 | 99.00th=[ 6980], 99.50th=[ 7111], 99.90th=[ 8225], 99.95th=[ 9110], 00:24:41.508 | 99.99th=[ 9896] 00:24:41.508 bw ( KiB/s): min=46456, max=48296, per=100.00%, avg=47564.00, stdev=806.10, samples=4 00:24:41.508 iops : min=11614, max=12074, avg=11891.00, stdev=201.52, samples=4 00:24:41.508 write: IOPS=11.8k, BW=46.2MiB/s (48.5MB/s)(92.7MiB/2006msec); 0 zone resets 00:24:41.508 slat (nsec): min=1558, max=224431, avg=1774.14, stdev=1649.38 00:24:41.508 clat (usec): min=2430, max=9777, avg=4809.58, stdev=384.74 00:24:41.508 lat (usec): min=2445, max=9779, avg=4811.36, stdev=384.79 00:24:41.508 clat percentiles (usec): 00:24:41.508 | 1.00th=[ 3982], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:24:41.508 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 4883], 00:24:41.508 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:24:41.508 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 7767], 99.95th=[ 8979], 00:24:41.508 | 99.99th=[ 9634] 00:24:41.508 bw ( KiB/s): min=47024, max=47872, per=100.00%, avg=47364.00, stdev=384.80, samples=4 00:24:41.508 iops : min=11756, max=11968, avg=11841.00, stdev=96.20, samples=4 00:24:41.508 lat (msec) : 4=0.58%, 10=99.42% 00:24:41.508 cpu : usr=74.76%, sys=24.29%, ctx=84, majf=0, minf=2 00:24:41.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:41.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:41.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:41.508 issued rwts: total=23851,23740,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:41.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:41.508 00:24:41.508 Run status group 0 (all jobs): 00:24:41.508 READ: bw=46.4MiB/s (48.7MB/s), 46.4MiB/s-46.4MiB/s (48.7MB/s-48.7MB/s), io=93.2MiB (97.7MB), run=2006-2006msec 00:24:41.508 WRITE: bw=46.2MiB/s (48.5MB/s), 46.2MiB/s-46.2MiB/s (48.5MB/s-48.5MB/s), io=92.7MiB (97.2MB), run=2006-2006msec 00:24:41.508 14:26:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:41.508 14:26:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:41.508 14:26:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:41.508 14:26:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:41.508 14:26:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local 
sanitizers 00:24:41.508 14:26:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:41.508 14:26:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:41.508 14:26:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:41.508 14:26:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:41.508 14:26:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:41.508 14:26:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:41.508 14:26:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:41.508 14:26:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:41.508 14:26:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:41.508 14:26:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:41.508 14:26:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:41.508 14:26:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:41.508 14:26:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:41.508 14:26:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:41.508 14:26:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:41.508 14:26:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:41.508 14:26:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:41.766 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:41.766 fio-3.35 00:24:41.766 Starting 1 thread 00:24:44.294 00:24:44.294 test: (groupid=0, jobs=1): err= 0: pid=1743122: Tue Dec 10 14:26:44 2024 00:24:44.294 read: IOPS=10.8k, BW=169MiB/s (177MB/s)(339MiB/2007msec) 00:24:44.294 slat (nsec): min=2375, max=86265, avg=2835.32, stdev=1191.51 00:24:44.294 clat (usec): min=1524, max=53066, avg=6844.38, stdev=3342.02 00:24:44.294 lat (usec): min=1527, max=53069, avg=6847.22, stdev=3342.04 00:24:44.294 clat percentiles (usec): 00:24:44.294 | 1.00th=[ 3490], 5.00th=[ 4293], 10.00th=[ 4686], 20.00th=[ 5342], 00:24:44.294 | 30.00th=[ 5800], 40.00th=[ 6128], 50.00th=[ 6587], 60.00th=[ 7046], 00:24:44.294 | 70.00th=[ 7373], 80.00th=[ 7832], 90.00th=[ 8586], 95.00th=[ 9634], 00:24:44.294 | 99.00th=[11207], 99.50th=[13304], 99.90th=[51119], 99.95th=[52691], 00:24:44.294 | 99.99th=[53216] 00:24:44.294 bw ( KiB/s): min=80288, max=100832, per=51.31%, avg=88784.00, stdev=9449.87, samples=4 00:24:44.294 iops : min= 5018, max= 6302, avg=5549.00, stdev=590.62, samples=4 00:24:44.294 write: IOPS=6513, BW=102MiB/s (107MB/s)(181MiB/1783msec); 0 zone resets 00:24:44.294 slat 
(usec): min=28, max=259, avg=31.99, stdev= 5.80 00:24:44.294 clat (usec): min=3462, max=55844, avg=8577.09, stdev=2608.43 00:24:44.294 lat (usec): min=3494, max=55874, avg=8609.08, stdev=2608.76 00:24:44.294 clat percentiles (usec): 00:24:44.294 | 1.00th=[ 5735], 5.00th=[ 6456], 10.00th=[ 6849], 20.00th=[ 7308], 00:24:44.294 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 8455], 60.00th=[ 8717], 00:24:44.294 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10290], 95.00th=[10945], 00:24:44.294 | 99.00th=[11994], 99.50th=[12649], 99.90th=[55313], 99.95th=[55313], 00:24:44.294 | 99.99th=[55837] 00:24:44.294 bw ( KiB/s): min=84064, max=104992, per=88.79%, avg=92528.00, stdev=9586.57, samples=4 00:24:44.294 iops : min= 5254, max= 6562, avg=5783.00, stdev=599.16, samples=4 00:24:44.294 lat (msec) : 2=0.07%, 4=1.88%, 10=90.91%, 20=6.76%, 50=0.13% 00:24:44.294 lat (msec) : 100=0.25% 00:24:44.294 cpu : usr=86.34%, sys=12.96%, ctx=40, majf=0, minf=2 00:24:44.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:44.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:44.294 issued rwts: total=21703,11613,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.294 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:44.294 00:24:44.294 Run status group 0 (all jobs): 00:24:44.294 READ: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=339MiB (356MB), run=2007-2007msec 00:24:44.294 WRITE: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=181MiB (190MB), run=1783-1783msec 00:24:44.294 14:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:44.294 14:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:44.294 14:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:44.294 14:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:44.294 14:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:44.294 14:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:44.294 14:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:44.294 14:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:44.294 14:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:44.294 14:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:44.294 14:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:44.294 rmmod nvme_tcp 00:24:44.294 rmmod nvme_fabrics 00:24:44.294 rmmod nvme_keyring 00:24:44.294 14:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:44.294 14:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:44.294 14:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:44.294 14:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1742177 ']' 00:24:44.294 14:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1742177 00:24:44.294 14:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1742177 ']' 00:24:44.294 14:26:44 
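The teardown now in flight mirrors the setup: nvmftestfini unloads the host-side modules, killprocess stops the target, and the iptables/netns state is unwound. Condensed from the surrounding trace, with one caveat: _remove_spdk_ns runs with xtrace disabled, so the netns deletion below is an illustrative equivalent rather than a traced command:

    modprobe -r nvme-tcp nvme-fabrics             # rmmod nvme_tcp/nvme_fabrics/nvme_keyring
    kill -0 "$nvmfpid"                            # is nvmf_tgt still alive?
    [ "$(ps --no-headers -o comm= "$nvmfpid")" = sudo ] || kill "$nvmfpid"   # never kill sudo itself
    wait "$nvmfpid"                               # reap the target
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk               # illustrative stand-in for _remove_spdk_ns
    ip -4 addr flush cvl_0_1                      # clear the initiator address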
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1742177 00:24:44.294 14:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:24:44.294 14:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:44.294 14:26:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1742177 00:24:44.294 14:26:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:44.294 14:26:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:44.294 14:26:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1742177' 00:24:44.294 killing process with pid 1742177 00:24:44.294 14:26:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1742177 00:24:44.294 14:26:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1742177 00:24:44.552 14:26:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:44.552 14:26:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:44.552 14:26:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:44.552 14:26:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:44.552 14:26:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:44.552 14:26:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:44.552 14:26:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:44.552 14:26:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:44.552 14:26:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:44.552 14:26:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.552 14:26:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:44.552 14:26:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.200 14:26:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:47.200 00:24:47.200 real 0m16.398s 00:24:47.200 user 0m46.375s 00:24:47.200 sys 0m7.263s 00:24:47.200 14:26:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:47.200 14:26:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.200 ************************************ 00:24:47.200 END TEST nvmf_fio_host 00:24:47.200 ************************************ 00:24:47.200 14:26:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:47.200 14:26:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:47.200 14:26:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:47.200 14:26:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.200 ************************************ 00:24:47.200 START TEST nvmf_failover 00:24:47.200 ************************************ 00:24:47.200 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:47.200 * Looking for test storage... 00:24:47.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:47.200 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:47.200 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:47.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.201 --rc genhtml_branch_coverage=1 00:24:47.201 --rc genhtml_function_coverage=1 00:24:47.201 --rc genhtml_legend=1 00:24:47.201 --rc geninfo_all_blocks=1 00:24:47.201 --rc geninfo_unexecuted_blocks=1 00:24:47.201 00:24:47.201 ' 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:47.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.201 --rc genhtml_branch_coverage=1 00:24:47.201 --rc genhtml_function_coverage=1 00:24:47.201 --rc genhtml_legend=1 00:24:47.201 --rc geninfo_all_blocks=1 00:24:47.201 --rc geninfo_unexecuted_blocks=1 00:24:47.201 00:24:47.201 ' 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:47.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.201 --rc genhtml_branch_coverage=1 00:24:47.201 --rc genhtml_function_coverage=1 00:24:47.201 --rc genhtml_legend=1 00:24:47.201 --rc geninfo_all_blocks=1 00:24:47.201 --rc geninfo_unexecuted_blocks=1 00:24:47.201 00:24:47.201 ' 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:47.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.201 --rc genhtml_branch_coverage=1 00:24:47.201 --rc genhtml_function_coverage=1 00:24:47.201 --rc genhtml_legend=1 00:24:47.201 --rc geninfo_all_blocks=1 00:24:47.201 --rc geninfo_unexecuted_blocks=1 00:24:47.201 00:24:47.201 ' 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- 
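The lcov probe above leans on scripts/common.sh's field-wise version comparison: split both version strings on '.', '-' and ':', then compare numerically field by field, treating missing fields as 0. A minimal sketch of that logic, reduced to the less-than case (the real cmp_versions dispatches on an operator argument):

    lt() {  # usage: lt 1.15 2  -> exit 0 when $1 < $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal is not less-than
    }
    lt 1.15 2 && echo "old lcov: keep the legacy --rc lcov_* option names"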
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:47.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
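rpc_py here is the same JSON-RPC client the fio-host test drove above; for reference, that test's whole target bring-up, condensed from the earlier trace with arguments verbatim (rpc.py talks to nvmf_tgt over the default /var/tmp/spdk.sock), was:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport
    $rpc bdev_malloc_create 64 512 -b Malloc1        # 64 MiB RAM bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420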
00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.201 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.202 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:47.202 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:47.202 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:47.202 14:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:53.793 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:53.793 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:53.793 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:53.793 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:53.793 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:53.793 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:53.793 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:53.793 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:53.793 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:53.793 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:53.793 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:53.793 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:53.793 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:53.793 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:53.793 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:53.793 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:53.793 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:53.793 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:53.793 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:53.794 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:53.794 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:53.794 Found net devices under 0000:af:00.0: cvl_0_0 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:53.794 Found net devices under 0000:af:00.1: cvl_0_1 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:53.794 14:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:53.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:53.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:24:53.794 00:24:53.794 --- 10.0.0.2 ping statistics --- 00:24:53.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.794 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:53.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:53.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:24:53.794 00:24:53.794 --- 10.0.0.1 ping statistics --- 00:24:53.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.794 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1747434 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1747434 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1747434 ']' 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:53.794 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:53.794 [2024-12-10 14:26:54.345182] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:24:53.794 [2024-12-10 14:26:54.345249] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.794 [2024-12-10 14:26:54.432586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:53.795 [2024-12-10 14:26:54.471101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
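Aside: the namespace plumbing traced above (nvmf/common.sh@250–291) reduces to a short root-only sequence — the target-side port is moved into its own network namespace so initiator and target traffic really crosses the wire. A sketch assuming the same cvl_0_0/cvl_0_1 names and 10.0.0.0/24 addressing as this run:

    # start from clean addresses on both ports
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    # isolate the target-side port in its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps 10.0.0.1 in the default namespace, target gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check reachability in both directions
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1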
00:24:53.795 [2024-12-10 14:26:54.471137] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:53.795 [2024-12-10 14:26:54.471144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:53.795 [2024-12-10 14:26:54.471151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:53.795 [2024-12-10 14:26:54.471157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:53.795 [2024-12-10 14:26:54.472555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:53.795 [2024-12-10 14:26:54.472663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.795 [2024-12-10 14:26:54.472664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:54.052 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.052 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:54.052 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:54.052 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:54.052 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:54.052 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.052 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:54.052 [2024-12-10 14:26:54.778335] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.310 14:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:54.310 Malloc0 00:24:54.310 14:26:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:54.566 14:26:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:54.822 14:26:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:54.822 [2024-12-10 14:26:55.558470] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.079 14:26:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:55.079 [2024-12-10 14:26:55.762998] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:55.079 14:26:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:55.336 [2024-12-10 14:26:55.971688] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:24:55.336 14:26:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:55.336 14:26:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1747821 00:24:55.336 14:26:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:55.336 14:26:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1747821 /var/tmp/bdevperf.sock 00:24:55.336 14:26:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1747821 ']' 00:24:55.336 14:26:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:55.336 14:26:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:55.336 14:26:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:55.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:55.336 14:26:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:55.336 14:26:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:55.593 14:26:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:55.593 14:26:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:55.593 14:26:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:56.158 NVMe0n1 00:24:56.158 14:26:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:56.416 00:24:56.416 14:26:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1747885 00:24:56.416 14:26:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:56.416 14:26:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:57.347 14:26:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:57.604 14:26:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:00.883 14:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:00.883 00:25:00.883 14:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:01.141 14:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:04.418 14:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:04.418 [2024-12-10 14:27:04.967598] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:04.418 14:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:05.349 14:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:05.608 [2024-12-10 14:27:06.190466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d5980 is same with the state(6) to be set
00:25:05.608 [the same tcp.c:1790 *ERROR* line for tqpair=0x22d5980 repeats verbatim from 14:27:06.190509 through 14:27:06.191199 while the 4422 listener is torn down; the duplicate lines are omitted here]
00:25:05.609 14:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1747885
00:25:12.175 {
00:25:12.175 "results": [
00:25:12.175 {
00:25:12.175 "job": "NVMe0n1",
00:25:12.175 "core_mask": "0x1",
00:25:12.175 "workload": "verify",
00:25:12.175 "status": "finished",
00:25:12.175 "verify_range": {
00:25:12.175 "start": 0,
00:25:12.175 "length": 16384
00:25:12.175 },
00:25:12.175 "queue_depth": 128,
00:25:12.175 "io_size": 4096,
00:25:12.175 "runtime": 15.006229,
00:25:12.175 "iops": 11348.620629473267,
00:25:12.175 "mibps": 44.33054933387995,
00:25:12.175 "io_failed": 4781,
00:25:12.175 "io_timeout": 0,
00:25:12.175 "avg_latency_us": 10949.153972020024,
00:25:12.175 "min_latency_us": 403.74857142857144,
00:25:12.175 "max_latency_us": 22344.655238095238
00:25:12.175 }
00:25:12.175 ],
00:25:12.175 "core_count": 1
00:25:12.175 }
00:25:12.175 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1747821
00:25:12.175 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1747821 ']'
00:25:12.175 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1747821
00:25:12.175 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:25:12.175 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:12.175 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1747821
00:25:12.175 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:12.175 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:12.175 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1747821'
00:25:12.175 killing process with pid 1747821
00:25:12.175 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1747821
00:25:12.175 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1747821
00:25:12.175 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:12.175
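Aside, before the try.txt dump resumes below: the failover choreography this test just drove can be condensed into a short RPC sequence. A sketch under the assumption that rpc.py stands for scripts/rpc.py in the SPDK tree; every flag value is lifted from the trace above:

    # target side: one malloc-backed subsystem, listeners on three ports
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done
    # initiator side: bdevperf in RPC mode, two paths registered with -x failover
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # start I/O, then yank listeners one at a time to force path switches
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The io_failed count in the JSON summary above (4781 against ~11.3k IOPS over 15 s) is exactly the I/O caught in flight by those listener removals; the per-command aborts appear in the try.txt dump that follows.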
[2024-12-10 14:26:56.028986] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:25:12.175 [2024-12-10 14:26:56.029034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747821 ] 00:25:12.175 [2024-12-10 14:26:56.105177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.175 [2024-12-10 14:26:56.145171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.175 Running I/O for 15 seconds... 00:25:12.175 11663.00 IOPS, 45.56 MiB/s [2024-12-10T13:27:12.915Z] [2024-12-10 14:26:58.224528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.175 [2024-12-10 14:26:58.224573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.175 [2024-12-10 14:26:58.224588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.175 [2024-12-10 14:26:58.224596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.175 [2024-12-10 14:26:58.224605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.175 [2024-12-10 14:26:58.224612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.175 [2024-12-10 14:26:58.224620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.175 [2024-12-10 14:26:58.224627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.175 [2024-12-10 14:26:58.224634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.175 [2024-12-10 14:26:58.224641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.175 [2024-12-10 14:26:58.224649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.175 [2024-12-10 14:26:58.224656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.175 [2024-12-10 14:26:58.224664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.175 [2024-12-10 14:26:58.224671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.175 [2024-12-10 14:26:58.224679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.175 [2024-12-10 14:26:58.224686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.175 
[the same nvme_qpair.c pair of NOTICEs — 243:nvme_io_qpair_print_command followed by 474:spdk_nvme_print_completion: ABORTED - SQ DELETION (00/08) — repeats here for every remaining queued READ and WRITE command, lba 100384 through 100936; those duplicate entries are omitted] 00:25:12.177 [2024-12-10 14:26:58.225734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.177 [2024-12-10 14:26:58.225742] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.177 [2024-12-10 14:26:58.225750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.177 [2024-12-10 14:26:58.225757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.177 [2024-12-10 14:26:58.225765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.177 [2024-12-10 14:26:58.225771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.177 [2024-12-10 14:26:58.225779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.177 [2024-12-10 14:26:58.225785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.177 [2024-12-10 14:26:58.225793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.177 [2024-12-10 14:26:58.225799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.177 [2024-12-10 14:26:58.225807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.177 [2024-12-10 14:26:58.225814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.177 [2024-12-10 14:26:58.225822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.177 [2024-12-10 14:26:58.225829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.177 [2024-12-10 14:26:58.225836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.177 [2024-12-10 14:26:58.225843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.177 [2024-12-10 14:26:58.225851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.177 [2024-12-10 14:26:58.225858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.177 [2024-12-10 14:26:58.225866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.177 [2024-12-10 14:26:58.225874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.177 [2024-12-10 14:26:58.225882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.177 [2024-12-10 14:26:58.225888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.177 [2024-12-10 14:26:58.225896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.177 [2024-12-10 14:26:58.225903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.177 [2024-12-10 14:26:58.225910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.177 [2024-12-10 14:26:58.225917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.177 [2024-12-10 14:26:58.225925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.177 [2024-12-10 14:26:58.225932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.177 [2024-12-10 14:26:58.225940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.177 [2024-12-10 14:26:58.225946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.177 [2024-12-10 14:26:58.225954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.177 [2024-12-10 14:26:58.225960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.177 [2024-12-10 14:26:58.225968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.177 [2024-12-10 14:26:58.225975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.177 [2024-12-10 14:26:58.225982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.177 [2024-12-10 14:26:58.225989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.177 [2024-12-10 14:26:58.225997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.178 [2024-12-10 14:26:58.226003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.178 [2024-12-10 14:26:58.226011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.178 [2024-12-10 14:26:58.226018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.178 [2024-12-10 14:26:58.226026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.178 [2024-12-10 14:26:58.226033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.178 [2024-12-10 14:26:58.226042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.178 [2024-12-10 14:26:58.226048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.178 [2024-12-10 14:26:58.226056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.178 [2024-12-10 14:26:58.226063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.178 [2024-12-10 14:26:58.226070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.178 [2024-12-10 14:26:58.226077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.178 [2024-12-10 14:26:58.226085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.178 [2024-12-10 14:26:58.226092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.178 [2024-12-10 14:26:58.226100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.178 [2024-12-10 14:26:58.226106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.178 [2024-12-10 14:26:58.226114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.178 [2024-12-10 14:26:58.226120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.178 [2024-12-10 14:26:58.226128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.178 [2024-12-10 14:26:58.226134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.178 [2024-12-10 14:26:58.226142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.178 [2024-12-10 14:26:58.226149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.178 [2024-12-10 14:26:58.226157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.178 [2024-12-10 14:26:58.226163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.178 [2024-12-10 14:26:58.226170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.178 [2024-12-10 14:26:58.226177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
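Every completion in the run above carries the same status pair, printed as "(00/08)". A minimal sketch of what that pair means, assuming nothing beyond the NVMe base specification's generic status values (illustrative C, not the SPDK printer itself):

```c
#include <stdio.h>

/* The "(SCT/SC)" pair in the notices is Status Code Type / Status Code
 * from the NVMe completion queue entry. SCT 0x0 is Generic Command
 * Status, and generic status 0x08 is "Command Aborted due to SQ
 * Deletion" -- expected when queued I/O is flushed as a submission
 * queue is torn down during failover. */
static const char *nvme_status_str(unsigned sct, unsigned sc)
{
    if (sct == 0x0) {               /* Generic Command Status */
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x07: return "ABORTED - BY REQUEST";
        case 0x08: return "ABORTED - SQ DELETION";
        }
    }
    return "OTHER";
}

int main(void)
{
    unsigned sct = 0x00, sc = 0x08;  /* the (00/08) seen in every notice */
    printf("(%02x/%02x) => %s\n", sct, sc, nvme_status_str(sct, sc));
    return 0;
}
```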
00:25:12.178 [2024-12-10 14:26:58.226198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:12.178 [2024-12-10 14:26:58.226205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101192 len:8 PRP1 0x0 PRP2 0x0
00:25:12.178 [2024-12-10 14:26:58.226211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 18 near-identical sequences elided: nvme_qpair_abort_queued_reqs "aborting queued i/o", then "Command completed manually:" for WRITE sqid:1 cid:0 nsid:1 len:8 PRP1 0x0 PRP2 0x0 at lba 101200 through 101336 (ascending in steps of 8), each completed with ABORTED - SQ DELETION (00/08) qid:1 ...]
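Storms like the two above convey only three facts: which opcode was queued, the LBA range it covered, and that everything was aborted by SQ deletion. A hypothetical post-processing filter (my sketch, not part of the autotest suite) that reduces such a run to one summary line:

```c
#include <stdio.h>
#include <string.h>

/* Reads a raw console log on stdin; counts "ABORTED - SQ DELETION"
 * completions and tracks the min/max LBA of the printed commands. */
int main(void)
{
    char line[4096];
    unsigned long aborts = 0, lba, lba_min = ~0UL, lba_max = 0;

    while (fgets(line, sizeof(line), stdin)) {
        if (strstr(line, "ABORTED - SQ DELETION"))
            aborts++;
        const char *p = strstr(line, "lba:");
        if (p && sscanf(p, "lba:%lu", &lba) == 1) {
            if (lba < lba_min) lba_min = lba;
            if (lba > lba_max) lba_max = lba;
        }
    }
    printf("%lu aborted completions, lba %lu..%lu\n",
           aborts, lba_min, lba_max);
    return 0;
}
```

Usage would be along the lines of `./nvme_abort_summary < console.log`.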
00:25:12.179 [2024-12-10 14:26:58.226683] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... four queued ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:0 through cid:3) aborted with SQ DELETION at 14:26:58.226705 through .226754 — elided ...]
00:25:12.179 [2024-12-10 14:26:58.226761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:12.179 [2024-12-10 14:26:58.229537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:12.179 [2024-12-10 14:26:58.229563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10418d0 (9): Bad file descriptor
00:25:12.179 [2024-12-10 14:26:58.266500] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
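The per-second throughput samples just below are self-consistent with the commands printed above: every aborted I/O is len:8, i.e. eight blocks, which at the assumed 512 B logical block size is 4 KiB per I/O, and IOPS x 4 KiB reproduces the reported MiB/s. A quick check:

```c
#include <stdio.h>

int main(void)
{
    /* IOPS samples taken from the log; 4096 B per I/O
     * (len:8 x 512 B blocks, assumed). */
    const double iops[] = { 11333.00, 11387.67, 11401.75 };

    for (int i = 0; i < 3; i++)
        printf("%8.2f IOPS -> %.2f MiB/s\n",
               iops[i], iops[i] * 4096.0 / (1024.0 * 1024.0));
    return 0;   /* prints 44.27, 44.48, 44.54 -- matching the log */
}
```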
00:25:12.179 11333.00 IOPS, 44.27 MiB/s [2024-12-10T13:27:12.919Z] 11387.67 IOPS, 44.48 MiB/s [2024-12-10T13:27:12.919Z] 11401.75 IOPS, 44.54 MiB/s [2024-12-10T13:27:12.919Z]
[... four queued ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:3 through cid:0) aborted with SQ DELETION at 14:27:01.744147 through .744248 — elided ...]
00:25:12.179 [2024-12-10 14:27:01.744255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10418d0 is same with the state(6) to be set
00:25:12.179 [2024-12-10 14:27:01.744913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.179 [2024-12-10 14:27:01.744930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... near-identical notice pairs elided: WRITE commands for lba 41360 through 41904 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and interleaved READ commands for lba 40944 through 41144 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each len:8 and each completed with ABORTED - SQ DELETION (00/08) qid:1; the log then truncates mid-entry ...]
00:25:12.181 [2024-12-10 14:27:01.746389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:41 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.181 [2024-12-10 14:27:01.746396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.181 [2024-12-10 14:27:01.746404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:41160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.181 [2024-12-10 14:27:01.746411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.181 [2024-12-10 14:27:01.746420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.182 [2024-12-10 14:27:01.746426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.182 [2024-12-10 14:27:01.746445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.182 [2024-12-10 14:27:01.746459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.182 [2024-12-10 14:27:01.746475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.182 [2024-12-10 14:27:01.746490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.182 [2024-12-10 14:27:01.746505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:41168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:01.746520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:01.746534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:41184 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:01.746549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:01.746564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:01.746581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:01.746596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:41216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:01.746610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:01.746626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:41232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:01.746643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:01.746658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:01.746673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:01.746689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:41264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:12.182 [2024-12-10 14:27:01.746704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:41272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:01.746718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:01.746733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.182 [2024-12-10 14:27:01.746748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:01.746763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:01.746778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:01.746793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:01.746808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:41320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:01.746822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:41328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:01.746839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:01.746853] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.182 [2024-12-10 14:27:01.746879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.182 [2024-12-10 14:27:01.746885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41344 len:8 PRP1 0x0 PRP2 0x0 00:25:12.182 [2024-12-10 14:27:01.746892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:01.746935] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:12.182 [2024-12-10 14:27:01.746944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:25:12.182 [2024-12-10 14:27:01.749707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:12.182 [2024-12-10 14:27:01.749735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10418d0 (9): Bad file descriptor 00:25:12.182 [2024-12-10 14:27:01.777485] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:25:12.182 11329.20 IOPS, 44.25 MiB/s [2024-12-10T13:27:12.922Z] 11313.83 IOPS, 44.19 MiB/s [2024-12-10T13:27:12.922Z] 11338.57 IOPS, 44.29 MiB/s [2024-12-10T13:27:12.922Z] 11336.25 IOPS, 44.28 MiB/s [2024-12-10T13:27:12.922Z] 11335.67 IOPS, 44.28 MiB/s [2024-12-10T13:27:12.922Z] [2024-12-10 14:27:06.192327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:06.192360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:06.192376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:06.192384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:06.192393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:59168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:06.192400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:06.192408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:06.192416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 14:27:06.192424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.182 [2024-12-10 14:27:06.192432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.182 [2024-12-10 
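Note: the bdevperf throughput ticks above can be sanity-checked against the command sizes in this log. Every command is len:8 blocks; assuming the conventional 512-byte block size (an assumption, inferred from the figures rather than stated in the log), each I/O moves 4 KiB, so MiB/s = IOPS x 4096 / 2^20. A minimal sketch in Python, not part of the test harness:

    # Cross-check the IOPS/MiB/s pairs printed by bdevperf above.
    IO_BYTES = 8 * 512  # len:8 blocks x 512 B per block (assumed block size)

    for iops in (11329.20, 11313.83, 11338.57, 11336.25, 11335.67):
        mib_s = iops * IO_BYTES / (1024 * 1024)
        print(f"{iops:9.2f} IOPS -> {mib_s:.2f} MiB/s")

    # Output: 44.25, 44.19, 44.29, 44.28, 44.28 MiB/s, matching the log lines.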
14:27:06.192440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192759] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 
lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.192992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.192998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.193006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.193013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.193021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.193027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.193037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.193044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.193052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:12.183 [2024-12-10 14:27:06.193058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.183 [2024-12-10 14:27:06.193066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.184 [2024-12-10 14:27:06.193072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.184 [2024-12-10 14:27:06.193087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:59536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.184 [2024-12-10 14:27:06.193102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.184 [2024-12-10 14:27:06.193116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.184 [2024-12-10 14:27:06.193130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.184 [2024-12-10 14:27:06.193144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.184 [2024-12-10 14:27:06.193158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.184 [2024-12-10 14:27:06.193172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.184 [2024-12-10 14:27:06.193186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.184 [2024-12-10 
14:27:06.193200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.184 [2024-12-10 14:27:06.193321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.184 [2024-12-10 14:27:06.193566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.184 [2024-12-10 14:27:06.193573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193789] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.185 [2024-12-10 14:27:06.193929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.185 [2024-12-10 14:27:06.193937] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.185 [2024-12-10 14:27:06.193943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.185 [... the same WRITE (SGL DATA BLOCK) / ABORTED - SQ DELETION pair repeats for lba:60008 through lba:60040 ...]
00:25:12.185 [2024-12-10 14:27:06.194036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:12.185 [2024-12-10 14:27:06.194044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60048 len:8 PRP1 0x0 PRP2 0x0
00:25:12.185 [2024-12-10 14:27:06.194050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.185 [... the aborting queued i/o / Command completed manually / WRITE (PRP1 0x0 PRP2 0x0) / ABORTED - SQ DELETION sequence repeats for lba:60056 through lba:60168 ...]
00:25:12.186 [2024-12-10 14:27:06.206667] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:25:12.186 [2024-12-10 14:27:06.206694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:12.186 [2024-12-10 14:27:06.206704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.186 [... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:1 through cid:3 ...]
00:25:12.186 [2024-12-10 14:27:06.206767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:25:12.186 [2024-12-10 14:27:06.206794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10418d0 (9): Bad file descriptor
00:25:12.186 [2024-12-10 14:27:06.210428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:25:12.186 [2024-12-10 14:27:06.239409] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
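What the burst above records: the test drops the path the host is actively using, so every WRITE still queued on the I/O qpair is completed manually with ABORTED - SQ DELETION (00/08) status, bdev_nvme starts a failover to the next registered path (here from 10.0.0.2:4422 back to 10.0.0.2:4420), and a controller reset restores I/O without failing the job. A minimal sketch of how such a failover is provoked, built from the same RPCs used elsewhere in this run (nvmf_subsystem_remove_listener is an assumed standard SPDK RPC that does not appear in this excerpt):

    # Target side: expose one subsystem on two TCP listeners.
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # Host side: register both paths on the same bdev with -x failover.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # Drop the active listener: queued I/O completes as ABORTED - SQ DELETION
    # and bdev_nvme retries it on the surviving path after the reset.
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422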
00:25:12.186 11288.80 IOPS, 44.10 MiB/s
[2024-12-10T13:27:12.926Z] 11303.36 IOPS, 44.15 MiB/s
[2024-12-10T13:27:12.926Z] 11327.33 IOPS, 44.25 MiB/s
[2024-12-10T13:27:12.926Z] 11334.46 IOPS, 44.28 MiB/s
[2024-12-10T13:27:12.926Z] 11340.00 IOPS, 44.30 MiB/s
00:25:12.186 Latency(us)
00:25:12.186 [2024-12-10T13:27:12.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:12.186 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:12.186 Verification LBA range: start 0x0 length 0x4000
00:25:12.186 NVMe0n1 : 15.01 11348.62 44.33 318.60 0.00 10949.15 403.75 22344.66
00:25:12.186 [2024-12-10T13:27:12.926Z] ===================================================================================================================
00:25:12.186 [2024-12-10T13:27:12.926Z] Total : 11348.62 44.33 318.60 0.00 10949.15 403.75 22344.66
00:25:12.186 Received shutdown signal, test time was about 15.000000 seconds
00:25:12.186
00:25:12.186 Latency(us)
00:25:12.186 [2024-12-10T13:27:12.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:12.186 [2024-12-10T13:27:12.926Z] ===================================================================================================================
00:25:12.186 [2024-12-10T13:27:12.926Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:12.186 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:12.186 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:12.186 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:12.186 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1750836
00:25:12.186 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1750836 /var/tmp/bdevperf.sock
00:25:12.186 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:12.186 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1750836 ']'
00:25:12.186 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:12.186 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:12.186 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:12.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
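The bdevperf line above uses its RPC-armed mode: -z brings the app up and then waits for a perform_tests RPC instead of running the workload immediately, -r names the RPC socket, and -q 128 -o 4096 -w verify -t 1 request a queue depth of 128, 4096-byte I/O, a verify workload and a one-second run. A sketch of the whole pattern as this harness uses it (paths relative to the SPDK tree seen in this log):

    # Start bdevperf idle, listening on its own RPC socket.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    # Give it a bdev to exercise over the same socket...
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # ...then trigger the run; results come back as the JSON shown below.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests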
00:25:12.186 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:12.186 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:12.186 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:12.186 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:12.186 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:12.186 [2024-12-10 14:27:12.874160] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:12.186 14:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:12.445 [2024-12-10 14:27:13.070695] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:12.445 14:27:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:13.010 NVMe0n1 00:25:13.010 14:27:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:13.268 00:25:13.268 14:27:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:13.833 00:25:13.833 14:27:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:13.833 14:27:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:13.833 14:27:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:14.090 14:27:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:17.369 14:27:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:17.369 14:27:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:17.369 14:27:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1751743 00:25:17.369 14:27:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:17.369 14:27:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1751743 00:25:18.302 { 00:25:18.302 "results": [ 00:25:18.302 { 00:25:18.302 "job": "NVMe0n1", 00:25:18.302 "core_mask": "0x1", 
00:25:18.302 "workload": "verify", 00:25:18.302 "status": "finished", 00:25:18.302 "verify_range": { 00:25:18.302 "start": 0, 00:25:18.302 "length": 16384 00:25:18.302 }, 00:25:18.302 "queue_depth": 128, 00:25:18.302 "io_size": 4096, 00:25:18.302 "runtime": 1.008197, 00:25:18.302 "iops": 11248.793638544848, 00:25:18.302 "mibps": 43.940600150565814, 00:25:18.302 "io_failed": 0, 00:25:18.302 "io_timeout": 0, 00:25:18.302 "avg_latency_us": 11328.091914965085, 00:25:18.302 "min_latency_us": 2324.967619047619, 00:25:18.302 "max_latency_us": 15541.394285714287 00:25:18.302 } 00:25:18.302 ], 00:25:18.302 "core_count": 1 00:25:18.302 } 00:25:18.302 14:27:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:18.302 [2024-12-10 14:27:12.478061] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:25:18.302 [2024-12-10 14:27:12.478113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750836 ] 00:25:18.302 [2024-12-10 14:27:12.559129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.302 [2024-12-10 14:27:12.595642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.302 [2024-12-10 14:27:14.695396] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:18.302 [2024-12-10 14:27:14.695440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.302 [2024-12-10 14:27:14.695450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.302 [2024-12-10 14:27:14.695459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.302 [2024-12-10 14:27:14.695466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.302 [2024-12-10 14:27:14.695474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.302 [2024-12-10 14:27:14.695481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.302 [2024-12-10 14:27:14.695488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.302 [2024-12-10 14:27:14.695494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.302 [2024-12-10 14:27:14.695501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:25:18.302 [2024-12-10 14:27:14.695527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:18.302 [2024-12-10 14:27:14.695541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254b8d0 (9): Bad file descriptor 00:25:18.302 [2024-12-10 14:27:14.706054] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:18.302 Running I/O for 1 seconds... 00:25:18.302 11213.00 IOPS, 43.80 MiB/s 00:25:18.302 Latency(us) 00:25:18.302 [2024-12-10T13:27:19.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.302 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:18.302 Verification LBA range: start 0x0 length 0x4000 00:25:18.302 NVMe0n1 : 1.01 11248.79 43.94 0.00 0.00 11328.09 2324.97 15541.39 00:25:18.302 [2024-12-10T13:27:19.042Z] =================================================================================================================== 00:25:18.302 [2024-12-10T13:27:19.042Z] Total : 11248.79 43.94 0.00 0.00 11328.09 2324.97 15541.39 00:25:18.302 14:27:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:18.302 14:27:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:18.559 14:27:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:18.853 14:27:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:18.853 14:27:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:19.111 14:27:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:19.111 14:27:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:22.390 14:27:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:22.390 14:27:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:22.390 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1750836 00:25:22.390 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1750836 ']' 00:25:22.390 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1750836 00:25:22.390 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:22.390 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:22.390 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1750836 00:25:22.390 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:22.390 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:22.390 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1750836' 00:25:22.390 killing process with pid 1750836 00:25:22.390 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1750836 00:25:22.390 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1750836 00:25:22.648 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:22.648 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:22.905 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:22.905 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:22.905 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:22.905 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:22.905 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:22.905 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:22.906 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:22.906 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:22.906 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:22.906 rmmod nvme_tcp 00:25:22.906 rmmod nvme_fabrics 00:25:22.906 rmmod nvme_keyring 00:25:22.906 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:22.906 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:22.906 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:22.906 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1747434 ']' 00:25:22.906 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1747434 00:25:22.906 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1747434 ']' 00:25:22.906 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1747434 00:25:22.906 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:22.906 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:22.906 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1747434 00:25:22.906 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:22.906 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:22.906 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1747434' 00:25:22.906 killing process with pid 1747434 00:25:22.906 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1747434 00:25:22.906 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1747434 00:25:23.164 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:25:23.164 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:23.164 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:23.164 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:23.164 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:25:23.164 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:25:23.164 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:23.164 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:23.164 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:23.164 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.164 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.164 14:27:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.699 14:27:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:25.699 00:25:25.699 real 0m38.497s 00:25:25.699 user 1m59.700s 00:25:25.699 sys 0m8.527s 00:25:25.699 14:27:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:25.699 14:27:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:25.699 ************************************ 00:25:25.699 END TEST nvmf_failover 00:25:25.699 ************************************ 00:25:25.699 14:27:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:25.699 14:27:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:25.699 14:27:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:25.699 14:27:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.699 ************************************ 00:25:25.699 START TEST nvmf_host_discovery 00:25:25.699 ************************************ 00:25:25.699 14:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:25.699 * Looking for test storage... 
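Before nvmf_host_discovery gets going, the nvmf_failover teardown above unwound the network state it had created. A rough bash equivalent of those helpers, using the interface and namespace names from this run (a sketch of nvmftestfini's effect, not a verbatim excerpt of nvmf/common.sh):

    # iptr: restore iptables while dropping only rules tagged with the SPDK_NVMF comment.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # _remove_spdk_ns: remove the namespace that held the target-side interface.
    ip netns delete cvl_0_0_ns_spdk
    # Flush the initiator-side address and unload the host-side NVMe modules.
    ip -4 addr flush cvl_0_1
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics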
00:25:25.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:25.699 14:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:25.699 14:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:25:25.699 14:27:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:25.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.699 --rc genhtml_branch_coverage=1 00:25:25.699 --rc genhtml_function_coverage=1 00:25:25.699 --rc genhtml_legend=1 00:25:25.699 --rc geninfo_all_blocks=1 00:25:25.699 --rc geninfo_unexecuted_blocks=1 00:25:25.699 00:25:25.699 ' 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:25.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.699 --rc genhtml_branch_coverage=1 00:25:25.699 --rc genhtml_function_coverage=1 00:25:25.699 --rc genhtml_legend=1 00:25:25.699 --rc geninfo_all_blocks=1 00:25:25.699 --rc geninfo_unexecuted_blocks=1 00:25:25.699 00:25:25.699 ' 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:25.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.699 --rc genhtml_branch_coverage=1 00:25:25.699 --rc genhtml_function_coverage=1 00:25:25.699 --rc genhtml_legend=1 00:25:25.699 --rc geninfo_all_blocks=1 00:25:25.699 --rc geninfo_unexecuted_blocks=1 00:25:25.699 00:25:25.699 ' 00:25:25.699 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:25.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.699 --rc genhtml_branch_coverage=1 00:25:25.699 --rc genhtml_function_coverage=1 00:25:25.699 --rc genhtml_legend=1 00:25:25.699 --rc geninfo_all_blocks=1 00:25:25.700 --rc geninfo_unexecuted_blocks=1 00:25:25.700 00:25:25.700 ' 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:25.700 14:27:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:25.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:25.700 14:27:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:32.268 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:32.268 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:32.268 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:32.269 14:27:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:32.269 Found net devices under 0000:af:00.0: cvl_0_0 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:32.269 Found net devices under 0000:af:00.1: cvl_0_1 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:32.269 
14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:32.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:32.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:25:32.269 00:25:32.269 --- 10.0.0.2 ping statistics --- 00:25:32.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.269 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:32.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:32.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:25:32.269 00:25:32.269 --- 10.0.0.1 ping statistics --- 00:25:32.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.269 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1756654 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1756654 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1756654 ']' 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:32.269 14:27:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.269 [2024-12-10 14:27:32.930992] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:25:32.269 [2024-12-10 14:27:32.931040] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.529 [2024-12-10 14:27:33.016043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.529 [2024-12-10 14:27:33.056450] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.529 [2024-12-10 14:27:33.056484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:32.529 [2024-12-10 14:27:33.056494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:32.529 [2024-12-10 14:27:33.056500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:32.529 [2024-12-10 14:27:33.056505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:32.529 [2024-12-10 14:27:33.057036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.529 [2024-12-10 14:27:33.192623] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.529 [2024-12-10 14:27:33.204776] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.529 null0 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.529 null1 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1756673 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1756673 /tmp/host.sock 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1756673 ']' 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:32.529 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:32.529 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.788 [2024-12-10 14:27:33.286787] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
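The setup traced above reduces to a handful of RPCs: create the TCP transport, expose the well-known discovery subsystem on port 8009, back the two future namespaces with null bdevs, then point the second (host-side) nvmf_tgt at the discovery service. A minimal sketch using the values from this run, with rpc.py standing in for the rpc_cmd wrapper:

    # target side (default /var/tmp/spdk.sock)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    scripts/rpc.py bdev_null_create null0 1000 512    # 1000 blocks of 512 B
    scripts/rpc.py bdev_null_create null1 1000 512
    scripts/rpc.py bdev_wait_for_examine
    # host side (-r /tmp/host.sock): attach whatever the discovery log reports, bdev names prefixed 'nvme'
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test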
00:25:32.788 [2024-12-10 14:27:33.286826] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756673 ] 00:25:32.788 [2024-12-10 14:27:33.364603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.788 [2024-12-10 14:27:33.404132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.788 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:32.788 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:32.788 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:32.788 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:32.788 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.788 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.788 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.788 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:32.788 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.788 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.047 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:33.306 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:33.306 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.306 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.306 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.306 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:33.306 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:33.306 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:33.306 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.306 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:33.306 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:33.306 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.306 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.306 [2024-12-10 14:27:33.838395] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.306 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.306 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:33.306 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:33.306 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:33.306 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.306 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:33.307 14:27:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:33.307 14:27:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.307 14:27:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:33.307 14:27:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:33.875 [2024-12-10 14:27:34.579659] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:33.875 [2024-12-10 14:27:34.579678] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:33.875 [2024-12-10 14:27:34.579691] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:34.133 [2024-12-10 14:27:34.708075] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:34.392 [2024-12-10 14:27:34.932179] 
bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:34.392 [2024-12-10 14:27:34.932799] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x127a260:1 started. 00:25:34.392 [2024-12-10 14:27:34.934159] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:34.392 [2024-12-10 14:27:34.934174] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:34.392 [2024-12-10 14:27:34.938461] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x127a260 was disconnected and freed. delete nvme_qpair. 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.392 14:27:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:34.392 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.651 [2024-12-10 14:27:35.234296] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x12489f0:1 started. 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:34.651 [2024-12-10 14:27:35.239148] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x12489f0 was disconnected and freed. delete nvme_qpair. 
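The is_notification_count_eq checks above hinge on a helper whose shape is visible in the xtrace: notify_get_notifications returns every event past a given offset, and notify_id then advances by the count so each subsequent wait only sees new events. A simplified reconstruction:

    get_notification_count() {
        notification_count=$(scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

In the trace, the notification raised when nvme0n1 appeared yields notification_count=1 and bumps notify_id to 1, so the event for nvme0n2 that follows is counted from offset 1.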
00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.651 [2024-12-10 14:27:35.338401] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:34.651 [2024-12-10 14:27:35.338629] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:34.651 [2024-12-10 14:27:35.338648] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.651 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:34.909 14:27:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.909 [2024-12-10 14:27:35.465012] 
bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:34.909 14:27:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:35.168 [2024-12-10 14:27:35.725094] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:35.168 [2024-12-10 14:27:35.725127] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:35.168 [2024-12-10 14:27:35.725135] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:35.168 [2024-12-10 14:27:35.725140] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:35.788 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:35.788 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:35.788 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:35.788 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:35.788 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:35.788 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.788 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:35.788 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.788 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:35.788 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.058 [2024-12-10 14:27:36.586344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.058 [2024-12-10 14:27:36.586368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.058 [2024-12-10 14:27:36.586377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.058 [2024-12-10 14:27:36.586383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.058 [2024-12-10 14:27:36.586390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.058 [2024-12-10 14:27:36.586396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.058 [2024-12-10 14:27:36.586403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.058 [2024-12-10 14:27:36.586409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.058 [2024-12-10 14:27:36.586415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124a710 is same with the state(6) to be set 00:25:36.058 [2024-12-10 14:27:36.586481] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:36.058 [2024-12-10 14:27:36.586493] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:36.058 [2024-12-10 14:27:36.596355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124a710 (9): Bad file descriptor 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:36.058 [2024-12-10 14:27:36.606390] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:36.058 [2024-12-10 14:27:36.606406] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:36.058 [2024-12-10 14:27:36.606413] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:36.058 [2024-12-10 14:27:36.606418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:36.058 [2024-12-10 14:27:36.606435] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:36.058 [2024-12-10 14:27:36.606654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.058 [2024-12-10 14:27:36.606668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124a710 with addr=10.0.0.2, port=4420 00:25:36.058 [2024-12-10 14:27:36.606676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124a710 is same with the state(6) to be set 00:25:36.058 [2024-12-10 14:27:36.606688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124a710 (9): Bad file descriptor 00:25:36.058 [2024-12-10 14:27:36.606705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:36.058 [2024-12-10 14:27:36.606712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:36.058 [2024-12-10 14:27:36.606720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:36.058 [2024-12-10 14:27:36.606725] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:36.058 [2024-12-10 14:27:36.606731] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:36.058 [2024-12-10 14:27:36.606735] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:36.058 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.058 [2024-12-10 14:27:36.616466] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:36.058 [2024-12-10 14:27:36.616476] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:36.058 [2024-12-10 14:27:36.616480] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:36.058 [2024-12-10 14:27:36.616484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:36.058 [2024-12-10 14:27:36.616497] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:36.058 [2024-12-10 14:27:36.616657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.058 [2024-12-10 14:27:36.616669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124a710 with addr=10.0.0.2, port=4420 00:25:36.058 [2024-12-10 14:27:36.616676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124a710 is same with the state(6) to be set 00:25:36.058 [2024-12-10 14:27:36.616687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124a710 (9): Bad file descriptor 00:25:36.058 [2024-12-10 14:27:36.616702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:36.058 [2024-12-10 14:27:36.616709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:36.058 [2024-12-10 14:27:36.616716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:36.058 [2024-12-10 14:27:36.616721] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:36.058 [2024-12-10 14:27:36.616726] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:36.058 [2024-12-10 14:27:36.616732] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:36.058 [2024-12-10 14:27:36.626528] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:36.058 [2024-12-10 14:27:36.626538] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:36.058 [2024-12-10 14:27:36.626542] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:36.058 [2024-12-10 14:27:36.626546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:36.058 [2024-12-10 14:27:36.626558] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:36.058 [2024-12-10 14:27:36.626657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-12-10 14:27:36.626668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124a710 with addr=10.0.0.2, port=4420
00:25:36.058 [2024-12-10 14:27:36.626675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124a710 is same with the state(6) to be set
00:25:36.058 [2024-12-10 14:27:36.626685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124a710 (9): Bad file descriptor
00:25:36.058 [2024-12-10 14:27:36.626694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:25:36.058 [2024-12-10 14:27:36.626700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:25:36.058 [2024-12-10 14:27:36.626706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:25:36.058 [2024-12-10 14:27:36.626711] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:25:36.058 [2024-12-10 14:27:36.626715] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:25:36.059 [2024-12-10 14:27:36.626719] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:36.059 [2024-12-10 14:27:36.636590] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:25:36.059 [2024-12-10 14:27:36.636603] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:25:36.059 [2024-12-10 14:27:36.636608] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:25:36.059 [2024-12-10 14:27:36.636612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:25:36.059 [2024-12-10 14:27:36.636626] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:25:36.059 [2024-12-10 14:27:36.636820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-12-10 14:27:36.636833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124a710 with addr=10.0.0.2, port=4420
00:25:36.059 [2024-12-10 14:27:36.636840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124a710 is same with the state(6) to be set
00:25:36.059 [2024-12-10 14:27:36.636851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124a710 (9): Bad file descriptor
00:25:36.059 [2024-12-10 14:27:36.636874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:25:36.059 [2024-12-10 14:27:36.636881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:25:36.059 [2024-12-10 14:27:36.636888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:25:36.059 [2024-12-10 14:27:36.636894] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:25:36.059 [2024-12-10 14:27:36.636904] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:25:36.059 [2024-12-10 14:27:36.636908] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:36.059 [2024-12-10 14:27:36.646657] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:25:36.059 [2024-12-10 14:27:36.646670] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:25:36.059 [2024-12-10 14:27:36.646674] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:25:36.059 [2024-12-10 14:27:36.646678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:25:36.059 [2024-12-10 14:27:36.646692] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:25:36.059 [2024-12-10 14:27:36.646798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-12-10 14:27:36.646810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124a710 with addr=10.0.0.2, port=4420
00:25:36.059 [2024-12-10 14:27:36.646817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124a710 is same with the state(6) to be set
00:25:36.059 [2024-12-10 14:27:36.646827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124a710 (9): Bad file descriptor
00:25:36.059 [2024-12-10 14:27:36.646837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:25:36.059 [2024-12-10 14:27:36.646843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:25:36.059 [2024-12-10 14:27:36.646849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:25:36.059 [2024-12-10 14:27:36.646855] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:25:36.059 [2024-12-10 14:27:36.646859] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:25:36.059 [2024-12-10 14:27:36.646863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:36.059 [2024-12-10 14:27:36.656723] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:25:36.059 [2024-12-10 14:27:36.656737] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:25:36.059 [2024-12-10 14:27:36.656741] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:25:36.059 [2024-12-10 14:27:36.656745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:25:36.059 [2024-12-10 14:27:36.656758] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:25:36.059 [2024-12-10 14:27:36.656924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-12-10 14:27:36.656934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124a710 with addr=10.0.0.2, port=4420
00:25:36.059 [2024-12-10 14:27:36.656942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124a710 is same with the state(6) to be set
00:25:36.059 [2024-12-10 14:27:36.656952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124a710 (9): Bad file descriptor
00:25:36.059 [2024-12-10 14:27:36.656967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:25:36.059 [2024-12-10 14:27:36.656973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:25:36.059 [2024-12-10 14:27:36.656980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:25:36.059 [2024-12-10 14:27:36.656985] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:25:36.059 [2024-12-10 14:27:36.656989] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:25:36.059 [2024-12-10 14:27:36.656993] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:36.059 [2024-12-10 14:27:36.666788] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:25:36.059 [2024-12-10 14:27:36.666798] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:25:36.059 [2024-12-10 14:27:36.666802] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:25:36.059 [2024-12-10 14:27:36.666806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:25:36.059 [2024-12-10 14:27:36.666818] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:25:36.059 [2024-12-10 14:27:36.666997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-12-10 14:27:36.667015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124a710 with addr=10.0.0.2, port=4420
00:25:36.059 [2024-12-10 14:27:36.667022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124a710 is same with the state(6) to be set
00:25:36.059 [2024-12-10 14:27:36.667031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124a710 (9): Bad file descriptor
00:25:36.059 [2024-12-10 14:27:36.667041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:25:36.059 [2024-12-10 14:27:36.667046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:25:36.059 [2024-12-10 14:27:36.667053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:25:36.059 [2024-12-10 14:27:36.667058] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:25:36.059 [2024-12-10 14:27:36.667062] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:25:36.059 [2024-12-10 14:27:36.667066] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
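The repeating ~10 ms cycle above is the bdev_nvme reconnect poller retrying a stale path: errno 111 is ECONNREFUSED, meaning nothing listens on 10.0.0.2:4420 anymore after the test moved the subsystem listener to port 4421. As a hedged illustration (the address and port come from this log; the probe itself is not part of the SPDK scripts), the same refusal can be observed from bash:

```bash
# Hypothetical probe, not from the test suite: a TCP connect to a port with
# no listener fails immediately, which is the "errno = 111" seen above.
if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "connect() to 10.0.0.2:4420 refused (ECONNREFUSED, errno 111)"
fi
```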
00:25:36.059 [2024-12-10 14:27:36.672596] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:25:36.059 [2024-12-10 14:27:36.672614] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]]
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:36.059 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:25:36.060 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:25:36.060 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:36.060 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:36.060 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:36.060 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:36.060 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:36.060 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:25:36.060 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:25:36.060 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:36.060 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:36.060 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:36.060 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:36.060 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:25:36.060 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:25:36.060 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:25:36.060 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:36.060 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:25:36.060 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:36.060 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:36.319 14:27:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:37.256 [2024-12-10 14:27:37.988377] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:25:37.256 [2024-12-10 14:27:37.988394] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:25:37.256 [2024-12-10 14:27:37.988405] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:37.514 [2024-12-10 14:27:38.074658] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
00:25:37.514 [2024-12-10 14:27:38.174274] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421
00:25:37.514 [2024-12-10 14:27:38.174719] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x12480c0:1 started.
00:25:37.514 [2024-12-10 14:27:38.176258] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:25:37.514 [2024-12-10 14:27:38.176282] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:25:37.514 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:37.514 [2024-12-10 14:27:38.177592] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x12480c0 was disconnected and freed. delete nvme_qpair.
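The xtrace lines above (common/autotest_common.sh@918 through @922) expose the shape of the waitforcondition helper the test uses to poll get_bdev_list and get_notification_count. A minimal sketch, reconstructed from the trace; only the variable names, max=10, and the eval of the condition string are visible in the log, so the retry delay is an assumption:

```bash
# Sketch reconstructed from the xtrace above; the sleep interval is assumed.
waitforcondition() {
    local cond=$1    # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
    local max=10
    while ((max--)); do
        eval "$cond" && return 0   # condition strings are eval'ed, as traced
        sleep 1                    # assumption: the trace does not show the delay
    done
    return 1
}
```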
00:25:37.514 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:37.514 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:25:37.514 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:37.514 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:25:37.514 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:37.514 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:25:37.514 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:37.514 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:37.514 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:37.514 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:37.514 request:
00:25:37.514 {
00:25:37.514 "name": "nvme",
00:25:37.514 "trtype": "tcp",
00:25:37.514 "traddr": "10.0.0.2",
00:25:37.514 "adrfam": "ipv4",
00:25:37.514 "trsvcid": "8009",
00:25:37.514 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:37.514 "wait_for_attach": true,
00:25:37.514 "method": "bdev_nvme_start_discovery",
00:25:37.514 "req_id": 1
00:25:37.514 }
00:25:37.514 Got JSON-RPC error response
00:25:37.515 response:
00:25:37.515 {
00:25:37.515 "code": -17,
00:25:37.515 "message": "File exists"
00:25:37.515 }
00:25:37.515 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:25:37.515 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:25:37.515 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:25:37.515 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:25:37.515 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:25:37.515 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:25:37.515 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:25:37.515 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:25:37.515 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:37.515 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:25:37.515 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:37.515 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:25:37.515 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:37.515 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:25:37.515 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:25:37.515 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:37.515 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:37.515 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:37.515 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:37.515 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:37.515 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:37.773 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:37.773 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:37.773 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:37.773 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:25:37.773 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:37.773 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:25:37.773 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:37.773 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:25:37.773 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:37.773 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:37.773 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:37.773 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:37.773 request:
00:25:37.773 {
00:25:37.773 "name": "nvme_second",
00:25:37.773 "trtype": "tcp",
00:25:37.773 "traddr": "10.0.0.2",
00:25:37.773 "adrfam": "ipv4",
00:25:37.773 "trsvcid": "8009",
00:25:37.773 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:37.773 "wait_for_attach": true,
00:25:37.773 "method": "bdev_nvme_start_discovery",
00:25:37.773 "req_id": 1
00:25:37.773 }
00:25:37.773 Got JSON-RPC error response
00:25:37.773 response:
00:25:37.773 {
00:25:37.773 "code": -17,
00:25:37.773 "message": "File exists"
00:25:37.773 }
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:37.774 14:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:38.709 [2024-12-10 14:27:39.411679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:38.710 [2024-12-10 14:27:39.411706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b1f90 with addr=10.0.0.2, port=8010
00:25:38.710 [2024-12-10 14:27:39.411720] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:25:38.710 [2024-12-10 14:27:39.411728] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:25:38.710 [2024-12-10 14:27:39.411737] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:25:40.086 [2024-12-10 14:27:40.414131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.086 [2024-12-10 14:27:40.414157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b1f90 with addr=10.0.0.2, port=8010
00:25:40.086 [2024-12-10 14:27:40.414172] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:25:40.086 [2024-12-10 14:27:40.414178] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:25:40.086 [2024-12-10 14:27:40.414185] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:25:41.023 [2024-12-10 14:27:41.416306] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:25:41.023 request:
00:25:41.023 {
00:25:41.023 "name": "nvme_second",
00:25:41.023 "trtype": "tcp",
00:25:41.023 "traddr": "10.0.0.2",
00:25:41.023 "adrfam": "ipv4",
00:25:41.023 "trsvcid": "8010",
00:25:41.023 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:41.023 "wait_for_attach": false,
00:25:41.023 "attach_timeout_ms": 3000,
00:25:41.023 "method": "bdev_nvme_start_discovery",
00:25:41.023 "req_id": 1
00:25:41.023 }
00:25:41.023 Got JSON-RPC error response
00:25:41.023 response:
00:25:41.023 {
00:25:41.023 "code": -110,
00:25:41.023 "message": "Connection timed out"
00:25:41.023 }
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
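Both failures above are exactly what the test asserts: re-registering discovery under an already-used bdev name fails fast with JSON-RPC error -17 ("File exists"), while pointing a new discovery service at 10.0.0.2:8010, where nothing listens, exhausts its 3000 ms attach_timeout_ms and returns -110 ("Connection timed out"). The commands below are taken verbatim from the trace; only the standalone rpc.py invocation style is an illustration:

```bash
# Same calls the test issues through rpc_cmd, shown as direct rpc.py usage.
# Duplicate controller name -> JSON-RPC error -17 "File exists":
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
    -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
# Unreachable discovery port with a 3 s timeout -> -110 "Connection timed out":
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
    -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
```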
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]]
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1756673
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1756654 ']'
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1756654
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1756654 ']'
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1756654
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1756654
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1756654'
killing process with pid 1756654
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1756654
00:25:41.023 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1756654
00:25:41.024 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:41.024 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:41.024 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:41.024 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr
00:25:41.024 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save
00:25:41.024 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:41.024 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore
00:25:41.024 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:41.024 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:41.024 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:41.024 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:41.024 14:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:43.560 14:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:43.560
00:25:43.560 real 0m17.917s
00:25:43.560 user 0m20.538s
00:25:43.560 sys 0m6.356s
00:25:43.560 14:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:43.560 14:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:43.560 ************************************
00:25:43.560 END TEST nvmf_host_discovery
00:25:43.560 ************************************
00:25:43.560 14:27:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:25:43.560 14:27:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:43.560 14:27:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:43.560 14:27:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:43.560 ************************************
00:25:43.560 START TEST nvmf_host_multipath_status
00:25:43.560 ************************************
00:25:43.560 14:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:25:43.560 * Looking for test storage...
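The teardown above traces killprocess from common/autotest_common.sh: it checks that the pid is alive with kill -0, resolves the process name via ps so it never kills sudo, then kills and reaps the target. A hedged reconstruction from the traced lines; anything beyond what the xtrace shows is an assumption:

```bash
# Reconstructed from the @954-@978 xtrace above; untraced details are assumed.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1            # @954: reject an empty pid
    kill -0 "$pid" || return 1           # @958: is the process still alive?
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # @960: reactor_1 here
        [ "$process_name" = sudo ] && return 1           # @964: never kill sudo
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                          # @978: reap it so the port is freed
}
```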
00:25:43.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:43.560 14:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:25:43.560 14:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version
00:25:43.560 14:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:25:43.560 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:25:43.560 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:43.560 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:43.560 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:43.560 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-:
00:25:43.560 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-:
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<'
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:25:43.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:43.561 --rc genhtml_branch_coverage=1
00:25:43.561 --rc genhtml_function_coverage=1
00:25:43.561 --rc genhtml_legend=1
00:25:43.561 --rc geninfo_all_blocks=1
00:25:43.561 --rc geninfo_unexecuted_blocks=1
00:25:43.561
00:25:43.561 '
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:25:43.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:43.561 --rc genhtml_branch_coverage=1
00:25:43.561 --rc genhtml_function_coverage=1
00:25:43.561 --rc genhtml_legend=1
00:25:43.561 --rc geninfo_all_blocks=1
00:25:43.561 --rc geninfo_unexecuted_blocks=1
00:25:43.561
00:25:43.561 '
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:25:43.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:43.561 --rc genhtml_branch_coverage=1
00:25:43.561 --rc genhtml_function_coverage=1
00:25:43.561 --rc genhtml_legend=1
00:25:43.561 --rc geninfo_all_blocks=1
00:25:43.561 --rc geninfo_unexecuted_blocks=1
00:25:43.561
00:25:43.561 '
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:25:43.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:43.561 --rc genhtml_branch_coverage=1
00:25:43.561 --rc genhtml_function_coverage=1
00:25:43.561 --rc genhtml_legend=1
00:25:43.561 --rc geninfo_all_blocks=1
00:25:43.561 --rc geninfo_unexecuted_blocks=1
00:25:43.561
00:25:43.561 '
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
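The scripts/common.sh trace above is `lt 1.15 2` deciding which lcov flags to export: cmp_versions splits both version strings on IFS=.-:, pads the shorter one, and compares field by field, so 1.15 sorts below 2 and the pre-2.0 LCOV_OPTS are used. A compact sketch of the same idea; this is a simplification in the spirit of the traced helper, not the verbatim implementation:

```bash
# Simplified field-by-field version compare modeled on the cmp_versions trace.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing fields -> 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal is not "less than"
}
lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* option names"
```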
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512
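One line a few entries up is a real bash diagnostic rather than xtrace: common.sh line 33 runs '[' '' -eq 1 ']' because an unset flag variable expands to the empty string, and test(1) cannot compare an empty string numerically. A generic illustration of the failing pattern and the usual guard (the variable name here is hypothetical, not taken from common.sh):

```bash
# The failing pattern: FLAG is unset, so test sees '[' '' -eq 1 ']' and prints
#   bash: [: : integer expression expected
FLAG=""
[ "${FLAG:-0}" -eq 1 ] && echo "enabled"   # default to 0 instead of empty
```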
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:43.561 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs
00:25:43.562 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no
00:25:43.562 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns
00:25:43.562 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:43.562 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:43.562 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:43.562 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:25:43.562 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:25:43.562 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable
00:25:43.562 14:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=()
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=()
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=()
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=()
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=()
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=()
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=()
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
Found 0000:af:00.0 (0x8086 - 0x159b)
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
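gather_supported_nvmf_pci_devs, traced above and continuing below, whitelists NIC PCI IDs (two Intel E810 functions, 0x8086:0x159b, on this rig) and then resolves each function to its kernel net device through sysfs. A minimal sketch of that resolution step, using the same glob the trace shows; the two device addresses come from this run:

```bash
# Map each whitelisted PCI function to its net device via sysfs, following
# the pci_net_devs glob traced above (addresses taken from this log).
for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done
```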
00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:50.136 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:50.136 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:50.137 Found net devices under 0000:af:00.0: cvl_0_0 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: 
cvl_0_1' 00:25:50.137 Found net devices under 0000:af:00.1: cvl_0_1 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:50.137 14:27:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:50.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:50.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:25:50.137 00:25:50.137 --- 10.0.0.2 ping statistics --- 00:25:50.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.137 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:50.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:50.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:25:50.137 00:25:50.137 --- 10.0.0.1 ping statistics --- 00:25:50.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.137 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1762009 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1762009 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1762009 ']' 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.137 14:27:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.137 14:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:50.137 [2024-12-10 14:27:50.862976] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:25:50.137 [2024-12-10 14:27:50.863023] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.397 [2024-12-10 14:27:50.946192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:50.397 [2024-12-10 14:27:50.986036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.397 [2024-12-10 14:27:50.986071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.397 [2024-12-10 14:27:50.986078] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:50.397 [2024-12-10 14:27:50.986084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:50.397 [2024-12-10 14:27:50.986089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:50.397 [2024-12-10 14:27:50.987211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.397 [2024-12-10 14:27:50.987212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.397 14:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.397 14:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:50.397 14:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.397 14:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.397 14:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:50.397 14:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.397 14:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1762009 00:25:50.397 14:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:50.657 [2024-12-10 14:27:51.296510] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.657 14:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:50.915 Malloc0 00:25:50.915 14:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:25:51.174 14:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:51.433 14:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:51.433 [2024-12-10 14:27:52.092948] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.433 14:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:51.692 [2024-12-10 14:27:52.285445] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:51.692 14:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1762286 00:25:51.692 14:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:51.692 14:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:51.692 14:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1762286 /var/tmp/bdevperf.sock 00:25:51.692 14:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1762286 ']' 00:25:51.692 14:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:51.692 14:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:51.692 14:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:51.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
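Everything from nvmftestinit through the second add_listener call above condenses to the replay below; $rpc is an assumed shorthand for the spdk/scripts/rpc.py path used throughout this log, and every address, port, and name is the one actually traced. The host side then attaches the same subsystem twice with -x multipath, once per port, as the bdevperf records that follow show.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Topology: the target NIC lives in its own namespace, the initiator NIC stays in the default one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Target app runs inside the namespace; its RPC socket is still /var/tmp/spdk.sock.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r enables ANA reporting
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421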
00:25:51.692 14:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:51.692 14:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:51.950 14:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:51.950 14:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:51.950 14:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:52.209 14:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:52.777 Nvme0n1 00:25:52.777 14:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:53.038 Nvme0n1 00:25:53.038 14:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:53.038 14:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:54.942 14:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:54.942 14:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:55.200 14:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:55.459 14:27:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:56.396 14:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:56.396 14:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:56.396 14:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.396 14:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:56.655 14:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.655 14:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:56.655 14:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.655 14:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:56.914 14:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:56.914 14:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:56.914 14:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.914 14:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:57.173 14:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.173 14:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:57.173 14:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.173 14:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:57.173 14:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.173 14:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:57.173 14:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.173 14:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:57.432 14:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.432 14:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:57.432 14:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.432 14:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:57.690 14:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.690 14:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:57.690 14:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
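The @59 call just traced and the matching 4421 call that follows it are the entire body of set_ANA_state: one nvmf_subsystem_listener_set_ana_state RPC per listener. A hedged reconstruction, reusing the $rpc shorthand and the NQN/IP from this run:

    set_ANA_state() {   # $1 = ANA state for the 4420 listener, $2 = for the 4421 listener
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
             -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
             -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

The test sweeps this through the ANA permutations seen in the trace (optimized/optimized, non_optimized/optimized, non_optimized/non_optimized, non_optimized/inaccessible, inaccessible/inaccessible, inaccessible/optimized), sleeping one second after each so the host has time to observe the new states.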
00:25:57.947 14:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:58.206 14:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:59.140 14:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:59.140 14:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:59.140 14:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.140 14:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:59.398 14:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:59.398 14:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:59.398 14:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.398 14:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:59.657 14:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.657 14:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:59.657 14:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.657 14:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:59.657 14:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.657 14:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:59.657 14:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.657 14:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:59.916 14:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.916 14:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:59.916 14:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
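Each port_status probe in the rounds above pairs one bdev_nvme_get_io_paths RPC against the bdevperf app's socket with a jq projection of a single field, then asserts the expected value. A sketch consistent with the exact filter syntax in this trace (again assuming the $rpc shorthand):

    port_status() {   # $1 = trsvcid, $2 = field (current|connected|accessible), $3 = expected value
        local got
        got=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$got" == "$3" ]]
    }

For example, port_status 4420 current true succeeds only while the 4420 path is the one actively carrying I/O, which is why the "current" expectation flips between the ports as the ANA states change.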
00:25:59.916 14:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:00.175 14:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.175 14:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:00.175 14:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.175 14:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:00.434 14:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.434 14:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:00.434 14:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:00.434 14:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:00.692 14:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:02.071 14:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:02.071 14:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:02.071 14:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.071 14:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:02.071 14:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.071 14:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:02.071 14:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.071 14:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:02.071 14:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:02.071 14:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:02.071 14:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.071 14:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:02.329 14:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.330 14:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:02.330 14:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:02.330 14:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.589 14:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.589 14:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:02.589 14:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.589 14:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:02.847 14:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.847 14:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:02.847 14:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.847 14:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:03.107 14:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.107 14:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:03.107 14:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:03.366 14:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:03.366 14:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:04.743 14:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:04.743 14:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:04.743 14:28:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:04.743 14:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.743 14:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.743 14:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:04.743 14:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.743 14:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:04.743 14:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:04.743 14:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:04.743 14:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.743 14:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:05.001 14:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.001 14:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:05.001 14:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:05.001 14:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.260 14:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.260 14:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:05.260 14:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:05.260 14:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.519 14:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.519 14:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:05.519 14:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.519 14:28:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:05.778 14:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.778 14:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:05.778 14:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:06.036 14:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:06.036 14:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:07.411 14:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:07.411 14:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:07.411 14:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.411 14:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:07.411 14:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.411 14:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:07.411 14:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.411 14:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:07.411 14:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.669 14:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:07.669 14:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.669 14:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:07.669 14:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.669 14:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:07.670 14:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.670 14:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:07.928 14:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.928 14:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:07.928 14:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:07.928 14:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.187 14:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:08.187 14:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:08.187 14:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:08.187 14:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.445 14:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:08.445 14:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:08.445 14:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:08.445 14:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:08.704 14:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:09.641 14:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:09.641 14:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:09.641 14:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.641 14:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:09.899 14:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:09.899 14:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:09.899 14:28:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.899 14:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:10.158 14:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.158 14:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:10.158 14:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.158 14:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:10.417 14:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.417 14:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:10.417 14:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.417 14:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:10.676 14:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.676 14:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:10.676 14:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.676 14:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:10.676 14:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.676 14:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:10.676 14:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.676 14:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:10.935 14:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.935 14:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:11.194 14:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:26:11.194 14:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:11.453 14:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:11.711 14:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:12.648 14:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:12.648 14:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:12.648 14:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.648 14:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:12.907 14:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.907 14:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:12.907 14:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.907 14:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:13.166 14:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.166 14:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:13.166 14:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.166 14:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:13.166 14:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.166 14:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:13.166 14:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.166 14:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:13.425 14:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.425 14:28:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:13.425 14:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.425 14:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:13.683 14:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.683 14:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:13.683 14:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.683 14:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:13.942 14:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.942 14:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:13.942 14:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:14.201 14:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:14.201 14:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:15.581 14:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:15.581 14:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:15.581 14:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.581 14:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:15.581 14:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.581 14:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:15.581 14:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.581 14:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:15.840 14:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.840 14:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:15.840 14:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.840 14:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:15.840 14:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.840 14:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:15.840 14:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.840 14:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:16.099 14:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.099 14:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:16.099 14:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.099 14:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:16.358 14:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.358 14:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:16.358 14:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.358 14:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:16.616 14:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.616 14:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:16.616 14:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:16.875 14:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:17.134 14:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
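check_status, invoked after every ANA transition and again after bdev_nvme_set_multipath_policy switches Nvme0n1 to active_active, is simply six such probes in a fixed order. The argument mapping below is inferred from the call patterns in this trace (for instance, check_status true false true true true false at @106 produced exactly the 4420-current-true / 4421-current-false / both-connected / 4421-inaccessible sequence of probes):

    check_status() {   # $1/$2: current, $3/$4: connected, $5/$6: accessible (4420 then 4421)
        port_status 4420 current    "$1" &&
        port_status 4421 current    "$2" &&
        port_status 4420 connected  "$3" &&
        port_status 4421 connected  "$4" &&
        port_status 4420 accessible "$5" &&
        port_status 4421 accessible "$6"
    }

Under active_active with both listeners optimized, both paths report current, which is why the @121 and @131 rounds assert check_status true true true true true true.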
00:26:18.070 14:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:18.070 14:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:18.070 14:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.070 14:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:18.328 14:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.328 14:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:18.329 14:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.329 14:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:18.587 14:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.587 14:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:18.587 14:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.587 14:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:18.587 14:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.587 14:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:18.587 14:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.587 14:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:18.846 14:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.846 14:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:18.846 14:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.846 14:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:19.105 14:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.105 14:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:19.105 14:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.105 14:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:19.364 14:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.364 14:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:19.364 14:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:19.622 14:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:19.622 14:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:20.994 14:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:20.994 14:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:20.994 14:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.994 14:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:20.994 14:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.994 14:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:20.994 14:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.994 14:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:21.252 14:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:21.252 14:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:21.252 14:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.252 14:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:21.252 14:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:26:21.253 14:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:21.253 14:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.253 14:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:21.511 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.511 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:21.511 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.511 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:21.769 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.769 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:21.769 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.769 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:22.028 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:22.028 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1762286 00:26:22.028 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1762286 ']' 00:26:22.028 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1762286 00:26:22.028 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:22.028 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:22.028 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1762286 00:26:22.028 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:22.028 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:22.028 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1762286' 00:26:22.028 killing process with pid 1762286 00:26:22.028 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1762286 00:26:22.028 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1762286 00:26:22.028 { 00:26:22.028 "results": [ 00:26:22.028 { 00:26:22.028 "job": "Nvme0n1", 
00:26:22.028 "core_mask": "0x4", 00:26:22.028 "workload": "verify", 00:26:22.028 "status": "terminated", 00:26:22.028 "verify_range": { 00:26:22.028 "start": 0, 00:26:22.028 "length": 16384 00:26:22.028 }, 00:26:22.028 "queue_depth": 128, 00:26:22.028 "io_size": 4096, 00:26:22.028 "runtime": 28.870527, 00:26:22.028 "iops": 10692.911840507795, 00:26:22.028 "mibps": 41.769186876983575, 00:26:22.028 "io_failed": 0, 00:26:22.028 "io_timeout": 0, 00:26:22.028 "avg_latency_us": 11950.12265541246, 00:26:22.028 "min_latency_us": 245.76, 00:26:22.028 "max_latency_us": 3019898.88 00:26:22.028 } 00:26:22.028 ], 00:26:22.028 "core_count": 1 00:26:22.028 } 00:26:22.290 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1762286 00:26:22.290 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:22.290 [2024-12-10 14:27:52.362235] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:26:22.290 [2024-12-10 14:27:52.362292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1762286 ] 00:26:22.290 [2024-12-10 14:27:52.443861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.290 [2024-12-10 14:27:52.483381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:22.290 Running I/O for 90 seconds... 00:26:22.290 11442.00 IOPS, 44.70 MiB/s [2024-12-10T13:28:23.030Z] 11520.50 IOPS, 45.00 MiB/s [2024-12-10T13:28:23.030Z] 11473.67 IOPS, 44.82 MiB/s [2024-12-10T13:28:23.030Z] 11503.75 IOPS, 44.94 MiB/s [2024-12-10T13:28:23.030Z] 11486.60 IOPS, 44.87 MiB/s [2024-12-10T13:28:23.030Z] 11491.50 IOPS, 44.89 MiB/s [2024-12-10T13:28:23.030Z] 11484.29 IOPS, 44.86 MiB/s [2024-12-10T13:28:23.030Z] 11487.50 IOPS, 44.87 MiB/s [2024-12-10T13:28:23.030Z] 11482.00 IOPS, 44.85 MiB/s [2024-12-10T13:28:23.030Z] 11489.80 IOPS, 44.88 MiB/s [2024-12-10T13:28:23.030Z] 11493.82 IOPS, 44.90 MiB/s [2024-12-10T13:28:23.030Z] 11493.33 IOPS, 44.90 MiB/s [2024-12-10T13:28:23.030Z] [2024-12-10 14:28:06.500801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:127936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.290 [2024-12-10 14:28:06.500840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:22.290 [2024-12-10 14:28:06.500874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.290 [2024-12-10 14:28:06.500883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:22.290 [2024-12-10 14:28:06.500896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:127952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.290 [2024-12-10 14:28:06.500903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:22.290 [2024-12-10 14:28:06.500916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:127960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.290 
[2024-12-10 14:28:06.500923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:22.290 [2024-12-10 14:28:06.500936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:127968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.290 [2024-12-10 14:28:06.500943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:22.290 [2024-12-10 14:28:06.500955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:127976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.290 [2024-12-10 14:28:06.500962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.500974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:127984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.291 [2024-12-10 14:28:06.500981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.500992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:127992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.291 [2024-12-10 14:28:06.500999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.501012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:127120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.291 [2024-12-10 14:28:06.501019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.501031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:127128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.291 [2024-12-10 14:28:06.501045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.501058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:127136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.291 [2024-12-10 14:28:06.501065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.501077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.291 [2024-12-10 14:28:06.501084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.501096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:127152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.291 [2024-12-10 14:28:06.501103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.501115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:127160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.291 [2024-12-10 14:28:06.501122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.501135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.291 [2024-12-10 14:28:06.501142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.501197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.291 [2024-12-10 14:28:06.501206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.501225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.291 [2024-12-10 14:28:06.501233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.501247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.291 [2024-12-10 14:28:06.501254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.501267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.291 [2024-12-10 14:28:06.501274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.501287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.291 [2024-12-10 14:28:06.501294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.501307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:128040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.291 [2024-12-10 14:28:06.501314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.501326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.291 [2024-12-10 14:28:06.501335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.501349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:128056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.291 [2024-12-10 14:28:06.501356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.502091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.291 [2024-12-10 14:28:06.502100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.502114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.291 [2024-12-10 14:28:06.502121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.502135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.291 [2024-12-10 14:28:06.502141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.502155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.291 [2024-12-10 14:28:06.502162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.502175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.291 [2024-12-10 14:28:06.502181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.502194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:128104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.291 [2024-12-10 14:28:06.502201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.502215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.291 [2024-12-10 14:28:06.502227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.502247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.291 [2024-12-10 14:28:06.502254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.502267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.291 [2024-12-10 14:28:06.502274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.502288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:127184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.291 [2024-12-10 14:28:06.502294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 
dnr:0 00:26:22.291 [2024-12-10 14:28:06.502307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:127192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.291 [2024-12-10 14:28:06.502314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.502330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:127200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.291 [2024-12-10 14:28:06.502336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.502350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:127208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.291 [2024-12-10 14:28:06.502357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.502370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:127216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.291 [2024-12-10 14:28:06.502377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.502390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:127224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.291 [2024-12-10 14:28:06.502398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.502411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:127232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.291 [2024-12-10 14:28:06.502417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.502430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:127240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.291 [2024-12-10 14:28:06.502437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.502451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:127248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.291 [2024-12-10 14:28:06.502458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.502471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:127256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.291 [2024-12-10 14:28:06.502478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.502491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:127264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.291 [2024-12-10 14:28:06.502497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.502511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.291 [2024-12-10 14:28:06.502518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.502531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:127280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.291 [2024-12-10 14:28:06.502537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:22.291 [2024-12-10 14:28:06.502550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.291 [2024-12-10 14:28:06.502557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.502572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:127296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.502580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.502593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:127304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.502600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.502613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:127312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.502619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.502632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.502639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.502652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:127328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.502658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.502671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:127336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.502678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.502691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:127344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 
14:28:06.502698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.502711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:127352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.502717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.502730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:127360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.502737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.502750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.502757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.502770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.502777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.502850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:127384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.502858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.502875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:127392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.502882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.502896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:127400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.502903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.502918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:127408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.502924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.502939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:127416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.502946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.502962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:127424 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.502969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.502984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:127432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.502990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.503005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:127440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.503012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.503027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.503033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.503048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:127456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.503055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.503070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:127464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.503076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.503091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:127472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.503097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.503112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:127480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.503119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.503134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:127488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.503142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.503457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.292 [2024-12-10 14:28:06.503466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.503482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.503490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.503506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.503513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.503529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.503535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.503551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:127520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.503558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.503573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:127528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.503580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.503596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.503602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.503620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:127544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.503626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.503642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:127552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.503649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.503664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:127560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.503671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.503687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:127568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.503693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002c p:0 
m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.503709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:127576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.503717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.503733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:127584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.503739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:22.292 [2024-12-10 14:28:06.503755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:127592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.292 [2024-12-10 14:28:06.503761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.503777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:127600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.503784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.503800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:127608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.503806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.503822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.293 [2024-12-10 14:28:06.503828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.503843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:127616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.503851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.503867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:127624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.503873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.503889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:127632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.503895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.503911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:127640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.503918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.503933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:127648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.503940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.503955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:127656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.503963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.503980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:127664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.503986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.504003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:127672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.504010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.504025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:127680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.504032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.504048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:127688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.504054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.504070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:127696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.504076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.504092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:127704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.504098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.504114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:127712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.504120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.504136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 
14:28:06.504143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.504158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:127728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.504165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.504181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:127736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.504187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.504203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:127744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.504212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.504294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:127752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.504303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.504321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:127760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.504327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.504347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.504354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.504371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.504377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.504395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.504402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.504420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.293 [2024-12-10 14:28:06.504427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:22.293 [2024-12-10 14:28:06.504444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127800 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.293 [2024-12-10 14:28:06.504450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:22.293 [2024-12-10 14:28:06.504468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:127808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.293 [2024-12-10 14:28:06.504474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0
[... identical READ command/completion NOTICE pairs elided for lba 127816 through 127928 (sqhd 004c-005a); every completion in this burst is ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:26:22.294 11304.77 IOPS, 44.16 MiB/s [2024-12-10T13:28:23.034Z]
10497.29 IOPS, 41.01 MiB/s [2024-12-10T13:28:23.034Z]
9797.47 IOPS, 38.27 MiB/s [2024-12-10T13:28:23.034Z]
9340.56 IOPS, 36.49 MiB/s [2024-12-10T13:28:23.034Z]
9470.18 IOPS, 36.99 MiB/s [2024-12-10T13:28:23.034Z]
9577.78 IOPS, 37.41 MiB/s [2024-12-10T13:28:23.034Z]
9742.32 IOPS, 38.06 MiB/s [2024-12-10T13:28:23.034Z]
9939.70 IOPS, 38.83 MiB/s [2024-12-10T13:28:23.034Z]
10130.33 IOPS, 39.57 MiB/s [2024-12-10T13:28:23.034Z]
10196.14 IOPS, 39.83 MiB/s [2024-12-10T13:28:23.034Z]
10257.04 IOPS, 40.07 MiB/s [2024-12-10T13:28:23.034Z]
10320.04 IOPS, 40.31 MiB/s [2024-12-10T13:28:23.034Z]
10446.76 IOPS, 40.81 MiB/s [2024-12-10T13:28:23.034Z]
10563.27 IOPS, 41.26 MiB/s [2024-12-10T13:28:23.034Z]
00:26:22.294 [2024-12-10 14:28:20.302514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:28688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.294 [2024-12-10 14:28:20.302554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
[... identical WRITE command/completion NOTICE pairs elided for lba 28704 through 29560, interleaved with READ pairs for lba 28544 through 29080 (sqhd 0036-007f, wrapping to 0001); every completion in this burst is ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:26:22.296 10636.22 IOPS, 41.55 MiB/s [2024-12-10T13:28:23.036Z]
10665.75 IOPS, 41.66 MiB/s [2024-12-10T13:28:23.036Z]
00:26:22.296 Received shutdown signal, test time was about 28.871146 seconds
00:26:22.296
00:26:22.296                                            Latency(us)
00:26:22.296 [2024-12-10T13:28:23.036Z] Device Information      : runtime(s)     IOPS  MiB/s  Fail/s  TO/s   Average     min         max
00:26:22.296 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:22.296 Verification LBA range: start 0x0 length 0x4000
00:26:22.296 Nvme0n1                 :      28.87 10692.91  41.77    0.00  0.00  11950.12  245.76  3019898.88
00:26:22.296 [2024-12-10T13:28:23.036Z] ===================================================================
00:26:22.296 [2024-12-10T13:28:23.036Z] Total                   :            10692.91  41.77    0.00  0.00  11950.12  245.76  3019898.88
00:26:22.296
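The (03/02) in the bursts above is the NVMe path-related status "Asymmetric Access Inaccessible" (status code type 0x3, status code 0x2): while the test toggles the path's ANA state, queued READ/WRITE commands complete with this error so the host can retry them on a reachable path, which is consistent with the IOPS samples dipping and then recovering. As a minimal sketch for inspecting per-path ANA state from an initiator that uses kernel NVMe multipath (the device name is illustrative, not taken from this log):

  # list the controllers backing this namespace's subsystem, including each path's ana_state
  nvme list-subsys /dev/nvme0n1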
00:26:22.296 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:22.296 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:22.296 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:22.296 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:26:22.296 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:22.296 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:26:22.296 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:22.296 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:26:22.296 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:22.296 14:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:22.296 rmmod nvme_tcp
00:26:22.296 rmmod nvme_fabrics
00:26:22.296 rmmod nvme_keyring
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1762009 ']'
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1762009
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1762009 ']'
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1762009
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1762009
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1762009'
00:26:22.555 killing process with pid 1762009
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1762009
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1762009
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
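A condensed sketch of the same teardown, useful for cleaning up by hand after a failed run; the PID, subsystem NQN and relative path are the ones from this run and are purely illustrative (run from the SPDK repo root):

  # delete the test subsystem, stop the target, then unload the kernel initiator modules
  sudo ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sudo kill 1762009 && wait 1762009          # wait only works if nvmf_tgt is a child of this shell
  sudo modprobe -v -r nvme-tcp nvme-fabrics  # also pulls out nvme_tcp/nvme_fabrics/nvme_keyring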
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:22.555 14:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:25.193 14:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:25.193
00:26:25.193 real	0m41.461s
00:26:25.193 user	1m50.528s
00:26:25.193 sys	0m12.265s
00:26:25.193 14:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:25.193 14:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:25.193 ************************************
00:26:25.193 END TEST nvmf_host_multipath_status
00:26:25.193 ************************************
00:26:25.193 14:28:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:25.193 14:28:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:25.193 14:28:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:25.193 14:28:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:25.193 ************************************
00:26:25.193 START TEST nvmf_discovery_remove_ifc
00:26:25.193 ************************************
00:26:25.193 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:25.193 * Looking for test storage...
00:26:25.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:25.193 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:25.193 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
00:26:25.193 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:25.193 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:25.193 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
[... scripts/common.sh@333-@368 trace elided: cmp_versions splits both versions on '.-:' (ver1=(1 15), ver2=(2)), validates each component with decimal, compares them numerically one position at a time, and returns 0 at @368 because 1 < 2 ...]
00:26:25.193 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:25.193 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:26:25.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:25.193 --rc genhtml_branch_coverage=1
00:26:25.193 --rc genhtml_function_coverage=1
00:26:25.193 --rc genhtml_legend=1
00:26:25.193 --rc geninfo_all_blocks=1
00:26:25.193 --rc geninfo_unexecuted_blocks=1
00:26:25.194 '
[... the same option block repeats verbatim for the LCOV_OPTS assignment (@1724) and for the LCOV export and assignment (@1725, 'LCOV=lcov ...'); duplicates elided ...]
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
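The lt/cmp_versions trace above is a plain bash semantic-version compare: split on dots, compare component by component numerically, and treat a missing component as 0 (here 1.15 < 2, so the legacy lcov option set is selected). A minimal standalone sketch of the same idea, assuming purely numeric components; version_lt is a name invented for this sketch, not the harness's function:

  version_lt() {   # returns 0 when $1 < $2
    local -a a b
    local i
    IFS='.-' read -ra a <<< "$1"
    IFS='.-' read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.x"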
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2-@6 trace elided: prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH, re-exports it and echoes the result; the repeated multi-kilobyte PATH strings are omitted ...]
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:25.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
[note: the numeric test at common.sh line 33 receives an empty flag value, hence the "integer expression expected" complaint; the guard still evaluates false and setup continues]
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
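The hostnqn/hostid pair above comes from nvme gen-hostnqn, which derives an nqn.2014-08.org.nvmexpress:uuid:... identifier from a host UUID. A hedged sketch of the same derivation with a uuidgen fallback for machines without nvme-cli (variable names mirror the trace but the fallback is an assumption, not harness code):

  NVME_HOSTNQN=$(nvme gen-hostnqn 2>/dev/null) ||
      NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # e.g. 801347e8-3fd0-e911-906e-0017a4403562 in this run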
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable
00:26:25.194 14:28:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
[... nvmf/common.sh@315-@321 trace elided: declares the empty pci_devs, pci_net_devs, pci_drivers, net_devs, e810 and x722 arrays ...]
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=()
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
[... nvmf/common.sh@330-@344 trace elided: the same pci_bus_cache lookup collects the Mellanox device IDs 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015 and 0x1013 into mlx ...]
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:26:31.780 Found 0000:af:00.0 (0x8086 - 0x159b)
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
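The harness selects NICs purely by PCI vendor:device ID (Intel 0x8086, E810 IDs 0x1592/0x159b above). A hedged one-liner for spot-checking the same match on a box by hand, outside the harness:

  # list PCI functions with the E810-XXV ID seen in this run (0x8086:0x159b)
  lspci -nn -d 8086:159b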
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:26:31.780 Found 0000:af:00.1 (0x8086 - 0x159b)
[... nvmf/common.sh@368-@378 trace elided: the same ice/unbound and 0x1017/0x1019 device-id checks repeat for 0000:af:00.1 ...]
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:26:31.780 Found net devices under 0000:af:00.0: cvl_0_0
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
[... nvmf/common.sh@410-@427 trace elided: the same sysfs netdev lookup repeats for 0000:af:00.1 ...]
00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:26:31.780 Found net devices under 0000:af:00.1: cvl_0_1
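The PCI-function-to-netdev mapping above uses the same sysfs glob the trace shows ("/sys/bus/pci/devices/$pci/net/"*). A hedged equivalent for checking one device by hand, using the address from this run as an illustration:

  # the directory name under .../net is the kernel interface name
  ls /sys/bus/pci/devices/0000:af:00.0/net   # -> cvl_0_0 on this machine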
net_devs+=("${pci_net_devs[@]}") 00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:31.780 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:31.781 
14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:31.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:31.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:26:31.781 00:26:31.781 --- 10.0.0.2 ping statistics --- 00:26:31.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.781 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:31.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:31.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:26:31.781 00:26:31.781 --- 10.0.0.1 ping statistics --- 00:26:31.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.781 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1771419 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1771419 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1771419 ']' 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
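[Annotation] nvmfappstart, traced above, launches the target inside the namespace and waits for its RPC socket. A reduced sketch of the same launch; the rpc_get_methods poll here is an assumption standing in for the harness's waitforlisten helper:

    # -m 0x2 pins the target to core 1; -e 0xFFFF enables every tracepoint
    # group so spdk_trace can snapshot events later.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Block until the app answers RPCs on its default UNIX domain socket.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done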
00:26:31.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:31.781 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.781 [2024-12-10 14:28:32.467658] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:26:31.781 [2024-12-10 14:28:32.467700] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.040 [2024-12-10 14:28:32.549436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.040 [2024-12-10 14:28:32.588210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.040 [2024-12-10 14:28:32.588248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.040 [2024-12-10 14:28:32.588255] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.040 [2024-12-10 14:28:32.588262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.040 [2024-12-10 14:28:32.588267] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:32.040 [2024-12-10 14:28:32.588800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.040 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.040 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:32.040 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:32.040 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:32.040 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.040 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:32.040 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:32.040 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.040 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.040 [2024-12-10 14:28:32.731646] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:32.040 [2024-12-10 14:28:32.739791] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:32.040 null0 00:26:32.040 [2024-12-10 14:28:32.771797] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.299 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.299 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1771442 00:26:32.299 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:26:32.299 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1771442 /tmp/host.sock 00:26:32.299 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1771442 ']' 00:26:32.299 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:32.299 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:32.299 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:32.299 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:32.299 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:32.299 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.299 [2024-12-10 14:28:32.839126] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:26:32.299 [2024-12-10 14:28:32.839166] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1771442 ] 00:26:32.299 [2024-12-10 14:28:32.918507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.299 [2024-12-10 14:28:32.959305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.299 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.299 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:32.299 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:32.299 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:32.299 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.299 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.299 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.300 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:32.300 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.300 14:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.558 14:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.558 14:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:32.558 14:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.558 14:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.494 [2024-12-10 14:28:34.128377] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:33.494 [2024-12-10 14:28:34.128397] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:33.494 [2024-12-10 14:28:34.128410] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:33.494 [2024-12-10 14:28:34.214661] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:33.753 [2024-12-10 14:28:34.430744] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:33.753 [2024-12-10 14:28:34.431428] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x120d210:1 started. 00:26:33.753 [2024-12-10 14:28:34.432725] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:33.753 [2024-12-10 14:28:34.432765] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:33.753 [2024-12-10 14:28:34.432783] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:33.753 [2024-12-10 14:28:34.432796] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:33.753 [2024-12-10 14:28:34.432814] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:33.753 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.753 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:33.753 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:33.753 [2024-12-10 14:28:34.437281] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x120d210 was disconnected and freed. delete nvme_qpair. 
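[Annotation] The host side runs a second SPDK app on /tmp/host.sock with --wait-for-rpc, so bdev_nvme options are set before framework init, and then discovery is started against the target's discovery service on port 8009. The rpc_cmd calls traced above expand to scripts/rpc.py invocations roughly like this sketch:

    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    ./scripts/rpc.py -s /tmp/host.sock framework_start_init
    # Short loss/reconnect timeouts make the interface removal later in the
    # test surface as a controller failure within a couple of seconds.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

--wait-for-attach returns only once the discovered subsystem at 10.0.0.2:4420 has been attached, which is why nvme0n1 is already present at the first get_bdev_list call below.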
00:26:33.753 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.753 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:33.753 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.753 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:33.753 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.753 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:33.753 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.753 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:33.753 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:33.753 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:34.012 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:34.012 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:34.012 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.012 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:34.012 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.012 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:34.012 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.012 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:34.012 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.012 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:34.012 14:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:34.946 14:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:34.946 14:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.946 14:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:34.946 14:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.946 14:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:34.946 14:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.946 14:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:34.946 14:28:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.205 14:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:35.205 14:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:36.141 14:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:36.141 14:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:36.141 14:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:36.141 14:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.141 14:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:36.141 14:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.141 14:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:36.141 14:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.141 14:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:36.141 14:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:37.076 14:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:37.076 14:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:37.076 14:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:37.076 14:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.076 14:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:37.076 14:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.076 14:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:37.076 14:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.076 14:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:37.076 14:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:38.452 14:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.452 14:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.452 14:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.452 14:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.452 14:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.452 14:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.452 14:28:38 
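[Annotation] The repeating @29/@33/@34 trace lines are a one-second polling loop from discovery_remove_ifc.sh. A sketch of the two helpers as they can be read back from the trace (the exact loop structure is inferred, not copied from the script):

    get_bdev_list() {
        # Names of all bdevs known to the host app, sorted and space-joined.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # Poll until the bdev list matches the expected value exactly
        # ('' means "wait until every bdev is gone").
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }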
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:38.452 14:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.452 14:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:38.452 14:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:39.387 14:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:39.387 14:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.387 14:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:39.387 14:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.387 14:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:39.387 14:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.387 14:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:39.387 [2024-12-10 14:28:39.874373] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:39.387 [2024-12-10 14:28:39.874408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.387 [2024-12-10 14:28:39.874419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.387 [2024-12-10 14:28:39.874427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.387 [2024-12-10 14:28:39.874434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.387 [2024-12-10 14:28:39.874441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.387 [2024-12-10 14:28:39.874451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.387 [2024-12-10 14:28:39.874458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.387 [2024-12-10 14:28:39.874465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.387 [2024-12-10 14:28:39.874473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.387 [2024-12-10 14:28:39.874479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.387 [2024-12-10 14:28:39.874486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e9a10 is same with the state(6) to be set 00:26:39.387 14:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.387 [2024-12-10 
14:28:39.884396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e9a10 (9): Bad file descriptor 00:26:39.387 [2024-12-10 14:28:39.894433] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:39.387 [2024-12-10 14:28:39.894443] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:39.387 [2024-12-10 14:28:39.894449] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:39.387 [2024-12-10 14:28:39.894454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:39.387 [2024-12-10 14:28:39.894472] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:39.387 14:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:39.387 14:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:40.323 14:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:40.323 14:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.323 14:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:40.323 14:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.323 14:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:40.323 14:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.323 14:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:40.323 [2024-12-10 14:28:40.922252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:40.323 [2024-12-10 14:28:40.922323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e9a10 with addr=10.0.0.2, port=4420 00:26:40.323 [2024-12-10 14:28:40.922355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e9a10 is same with the state(6) to be set 00:26:40.323 [2024-12-10 14:28:40.922410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e9a10 (9): Bad file descriptor 00:26:40.323 [2024-12-10 14:28:40.923360] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:26:40.323 [2024-12-10 14:28:40.923423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:40.323 [2024-12-10 14:28:40.923446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:40.323 [2024-12-10 14:28:40.923469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:40.323 [2024-12-10 14:28:40.923498] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:40.323 [2024-12-10 14:28:40.923515] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:26:40.323 [2024-12-10 14:28:40.923529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:40.323 [2024-12-10 14:28:40.923551] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:40.323 [2024-12-10 14:28:40.923565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:40.323 14:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.323 14:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:40.323 14:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:41.258 [2024-12-10 14:28:41.926080] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:41.258 [2024-12-10 14:28:41.926100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:41.258 [2024-12-10 14:28:41.926111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:41.258 [2024-12-10 14:28:41.926117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:41.258 [2024-12-10 14:28:41.926124] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:41.258 [2024-12-10 14:28:41.926131] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:41.258 [2024-12-10 14:28:41.926151] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:41.258 [2024-12-10 14:28:41.926155] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:41.258 [2024-12-10 14:28:41.926175] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:41.258 [2024-12-10 14:28:41.926195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.258 [2024-12-10 14:28:41.926204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.258 [2024-12-10 14:28:41.926213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.258 [2024-12-10 14:28:41.926225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.258 [2024-12-10 14:28:41.926232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.258 [2024-12-10 14:28:41.926239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.259 [2024-12-10 14:28:41.926246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.259 [2024-12-10 14:28:41.926253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.259 [2024-12-10 14:28:41.926260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.259 [2024-12-10 14:28:41.926266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.259 [2024-12-10 14:28:41.926273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:26:41.259 [2024-12-10 14:28:41.926583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d8d20 (9): Bad file descriptor 00:26:41.259 [2024-12-10 14:28:41.927594] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:41.259 [2024-12-10 14:28:41.927605] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:41.259 14:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:41.259 14:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:41.259 14:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:41.259 14:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.259 14:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:41.259 14:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.259 14:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:41.259 14:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.517 14:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:41.517 14:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:41.517 14:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:41.517 14:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:41.517 14:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:41.517 14:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:41.517 14:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:41.517 14:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.517 14:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:41.517 14:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.517 14:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:41.517 14:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.517 14:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:41.517 14:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:42.453 14:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:42.453 14:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.453 14:28:43 
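[Annotation] This is the core of the test: the target's interface is pulled out from under a live connection, the host must notice and drop nvme0n1, and once the interface returns, discovery must re-attach a fresh controller. The cycle, condensed from the @75/@76 and @82/@83 trace lines:

    # Fault injection: remove the target address and down the link.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''          # nvme0n1 must disappear (ctrlr-loss timeout fires)
    # Recovery: restore the address and bring the link back up.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1     # rediscovery attaches a new controller, nvme1

Note the new bdev is nvme1n1, not nvme0n1: the old controller was destroyed after its reset attempts failed, so the re-attached subsystem gets a new controller name.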
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:42.453 14:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.453 14:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:42.453 14:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:42.453 14:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.453 14:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.453 14:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:42.453 14:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:43.388 [2024-12-10 14:28:43.983684] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:43.388 [2024-12-10 14:28:43.983700] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:43.388 [2024-12-10 14:28:43.983712] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:43.389 [2024-12-10 14:28:44.111108] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:43.647 14:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:43.647 14:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.647 14:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:43.647 14:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.647 14:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:43.647 14:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.647 14:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:43.647 14:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.647 14:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:43.647 14:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:43.647 [2024-12-10 14:28:44.293990] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:43.647 [2024-12-10 14:28:44.294585] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1216980:1 started. 
00:26:43.647 [2024-12-10 14:28:44.295573] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:43.647 [2024-12-10 14:28:44.295602] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:43.647 [2024-12-10 14:28:44.295619] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:43.647 [2024-12-10 14:28:44.295631] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:43.647 [2024-12-10 14:28:44.295638] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:43.647 [2024-12-10 14:28:44.302571] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1216980 was disconnected and freed. delete nvme_qpair. 00:26:44.583 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:44.583 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.583 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:44.583 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.583 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:44.583 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.583 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:44.583 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.583 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:44.583 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:44.583 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1771442 00:26:44.583 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1771442 ']' 00:26:44.583 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1771442 00:26:44.583 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:44.583 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:44.583 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1771442 00:26:44.841 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:44.841 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:44.841 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1771442' 00:26:44.841 killing process with pid 1771442 00:26:44.841 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1771442 00:26:44.841 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1771442 00:26:44.841 14:28:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:44.841 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:44.841 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:44.841 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:44.841 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:44.841 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:44.841 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:44.841 rmmod nvme_tcp 00:26:44.841 rmmod nvme_fabrics 00:26:44.841 rmmod nvme_keyring 00:26:44.841 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:44.842 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:44.842 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:44.842 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1771419 ']' 00:26:44.842 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1771419 00:26:44.842 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1771419 ']' 00:26:44.842 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1771419 00:26:44.842 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:44.842 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:44.842 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1771419 00:26:45.101 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:45.101 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:45.101 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1771419' 00:26:45.101 killing process with pid 1771419 00:26:45.101 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1771419 00:26:45.101 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1771419 00:26:45.101 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:45.101 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:45.101 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:45.101 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:45.101 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:45.101 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:45.101 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:45.101 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
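[Annotation] nvmftestfini tears everything down in roughly the reverse order of setup. A condensed sketch of the sequence traced here, using the pids from this run; the ip netns delete line is an assumption about what the _remove_spdk_ns helper does:

    kill 1771442 && wait 1771442   # host app (reactor_0)
    kill 1771419 && wait 1771419   # namespaced nvmf_tgt (reactor_1)
    sync
    modprobe -v -r nvme-tcp        # rmmod output above shows nvme_fabrics
    modprobe -v -r nvme-fabrics    # and nvme_keyring going with it
    # Strip only the SPDK_NVMF-tagged firewall rules added during setup.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1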
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:45.101 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:45.101 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.101 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:45.101 14:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.636 14:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:47.636 00:26:47.636 real 0m22.412s 00:26:47.636 user 0m27.096s 00:26:47.636 sys 0m6.550s 00:26:47.636 14:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:47.636 14:28:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.636 ************************************ 00:26:47.636 END TEST nvmf_discovery_remove_ifc 00:26:47.636 ************************************ 00:26:47.637 14:28:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:47.637 14:28:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:47.637 14:28:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:47.637 14:28:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.637 ************************************ 00:26:47.637 START TEST nvmf_identify_kernel_target 00:26:47.637 ************************************ 00:26:47.637 14:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:47.637 * Looking for test storage... 
00:26:47.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:47.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.637 --rc genhtml_branch_coverage=1 00:26:47.637 --rc genhtml_function_coverage=1 00:26:47.637 --rc genhtml_legend=1 00:26:47.637 --rc geninfo_all_blocks=1 00:26:47.637 --rc geninfo_unexecuted_blocks=1 00:26:47.637 00:26:47.637 ' 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:47.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.637 --rc genhtml_branch_coverage=1 00:26:47.637 --rc genhtml_function_coverage=1 00:26:47.637 --rc genhtml_legend=1 00:26:47.637 --rc geninfo_all_blocks=1 00:26:47.637 --rc geninfo_unexecuted_blocks=1 00:26:47.637 00:26:47.637 ' 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:47.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.637 --rc genhtml_branch_coverage=1 00:26:47.637 --rc genhtml_function_coverage=1 00:26:47.637 --rc genhtml_legend=1 00:26:47.637 --rc geninfo_all_blocks=1 00:26:47.637 --rc geninfo_unexecuted_blocks=1 00:26:47.637 00:26:47.637 ' 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:47.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.637 --rc genhtml_branch_coverage=1 00:26:47.637 --rc genhtml_function_coverage=1 00:26:47.637 --rc genhtml_legend=1 00:26:47.637 --rc geninfo_all_blocks=1 00:26:47.637 --rc geninfo_unexecuted_blocks=1 00:26:47.637 00:26:47.637 ' 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
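[Annotation] The scripts/common.sh trace above is a field-wise version compare used to pick the lcov option spelling. A reduced reconstruction of its logic, simplified to the '<' and '>' cases seen in this run (the real helper handles more operators):

    cmp_versions() {
        local IFS='.-:' op=$2 v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        # Compare numerically, field by field; missing fields count as 0.
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            if ((${ver1[v]:-0} > ${ver2[v]:-0})); then
                [[ $op == '>' ]]
                return
            elif ((${ver1[v]:-0} < ${ver2[v]:-0})); then
                [[ $op == '<' ]]
                return
            fi
        done
        return 1   # equal: neither strictly less nor greater
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    # lt 1.15 2 -> true, so the pre-1.16 lcov option spelling is selected.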
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain triplet repeated six more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[previous PATH] 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[previous PATH] 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[remainder of PATH as above] 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:47.637 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:47.638 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.638 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.638 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:26:47.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:47.638 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:47.638 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:47.638 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:47.638 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:47.638 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:47.638 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:47.638 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:47.638 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:47.638 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:47.638 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.638 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:47.638 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.638 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:47.638 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:47.638 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:47.638 14:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:54.203 14:28:54 
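The diagnostic above is worth a note: nvmf/common.sh line 33 applies -eq to a variable that is empty at this point, xtrace shows the expansion as '[' '' -eq 1 ']', and test(1) rejects the empty string as a non-integer. The run continues because the check is non-fatal. A minimal sketch of the failure mode and one defensive form; SPDK_TEST_FLAG is a stand-in name, not the variable common.sh actually tests:

# Reproduce the "[: : integer expression expected" diagnostic, then
# guard against it by defaulting the empty value to 0.
unset SPDK_TEST_FLAG
[ "$SPDK_TEST_FLAG" -eq 1 ]                 # prints the error seen above
if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then   # defensive variant
    echo "flag enabled"
fi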
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:54.203 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:54.203 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:54.203 Found net devices under 0000:af:00.0: cvl_0_0 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:54.203 Found net devices under 0000:af:00.1: cvl_0_1 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:54.203 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:54.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:54.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:26:54.204 00:26:54.204 --- 10.0.0.2 ping statistics --- 00:26:54.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.204 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:54.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:54.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:26:54.204 00:26:54.204 --- 10.0.0.1 ping statistics --- 00:26:54.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.204 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.204 14:28:54 
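The "Found net devices under 0000:af:00.x" records above come from globbing each selected PCI function's net/ directory in sysfs; the [[ up == up ]] check is the script confirming the port's operstate. A self-contained sketch of that mapping (simplified; the real common.sh iterates the pci_devs array it built earlier):

# Resolve the kernel net device behind a PCI function, as the
# "Found net devices under ..." records do.
pci=0000:af:00.0    # one of the two e810 ports from the trace
for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
    [[ -e $netdir ]] || continue
    dev=${netdir##*/}
    echo "Found net devices under $pci: $dev ($(cat "/sys/class/net/$dev/operstate"))"
done

With both ports identified, nvmf_tcp_init splits them across a network namespace so one physical host can play both target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, in the root namespace). Condensed from the ip and iptables commands traced above; the matching cleanup is xtrace-disabled in this log:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Tagged ACCEPT rule so teardown can strip it with grep -v SPDK_NVMF:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                           # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator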
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:54.204 14:28:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:57.489 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:26:57.489 Waiting for block devices as requested 00:26:57.489 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:57.748 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:57.748 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:57.748 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:57.748 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:58.007 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:58.007 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:58.007 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:58.266 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:58.266 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:58.266 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:58.524 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:58.524 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:58.524 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:58.524 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:58.783 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:58.783 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:58.783 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:58.783 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:58.783 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:58.783 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:58.783 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:58.783 14:28:59 
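The "vfio-pci -> nvme" and "vfio-pci -> ioatdma" lines above are setup.sh reset handing devices back to their kernel drivers so the kernel target can use a real NVMe namespace. The usual sysfs mechanics look roughly like this; a sketch only, since setup.sh also handles allowlists (the "Skipping denied controller" case) and missing-driver corner cases:

# Rebind one PCI function to a named kernel driver via sysfs,
# e.g. "0000:5e:00.0 (8086 0a54): vfio-pci -> nvme".
dev=0000:5e:00.0
drv=nvme
if [[ -e /sys/bus/pci/devices/$dev/driver ]]; then
    echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"
fi
echo "$drv" > "/sys/bus/pci/devices/$dev/driver_override"
echo "$dev" > /sys/bus/pci/drivers_probe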
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:58.783 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:58.783 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:58.783 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:58.783 No valid GPT data, bailing 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:26:59.042 No valid GPT data, bailing 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n2 ]] 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n2 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:26:59.042 14:28:59 
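The probes above implement backing-device selection: a device qualifies only if its queue/zoned attribute is none (the host-managed nvme1n2 is rejected by the continue in the next record) and it carries no partition table, which is what spdk-gpt.py's "No valid GPT data, bailing" plus an empty blkid PTTYPE establish. A reduced sketch of that loop:

# Pick the backing device the way the trace does: skip zoned devices
# and anything already partitioned, keep the last node that passes.
nvme=
for block in /sys/block/nvme*; do
    [[ -e $block ]] || continue
    dev=${block##*/}
    [[ $(cat "$block/queue/zoned") != none ]] && continue
    pt=$(blkid -s PTTYPE -o value "/dev/$dev")
    [[ -n $pt ]] && continue        # has a partition table: treat as in use
    nvme=/dev/$dev
done
echo "backing device: $nvme"

The mkdir/echo/ln records that follow then publish that device through the kernel nvmet stack. xtrace does not print redirection targets, so the attribute file names below are filled in from the standard nvmet configfs layout rather than read from the log; treat this as a reconstruction of configure_kernel_target, not a verbatim copy:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe nvmet nvmet-tcp
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"
echo 1             > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1  > "$subsys/namespaces/1/device_path"
echo 1             > "$subsys/namespaces/1/enable"
echo 10.0.0.1      > "$nvmet/ports/1/addr_traddr"
echo tcp           > "$nvmet/ports/1/addr_trtype"
echo 4420          > "$nvmet/ports/1/addr_trsvcid"
echo ipv4          > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # exposes the subsystem on the port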
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # continue 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:26:59.042 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:59.043 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:59.043 00:26:59.043 Discovery Log Number of Records 2, Generation counter 2 00:26:59.043 =====Discovery Log Entry 0====== 00:26:59.043 trtype: tcp 00:26:59.043 adrfam: ipv4 00:26:59.043 subtype: current discovery subsystem 00:26:59.043 treq: not specified, sq flow control disable supported 00:26:59.043 portid: 1 00:26:59.043 trsvcid: 4420 00:26:59.043 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:59.043 traddr: 10.0.0.1 00:26:59.043 eflags: none 00:26:59.043 sectype: none 00:26:59.043 =====Discovery Log Entry 1====== 00:26:59.043 trtype: tcp 00:26:59.043 adrfam: ipv4 00:26:59.043 subtype: nvme subsystem 00:26:59.043 treq: not specified, sq flow control disable supported 00:26:59.043 portid: 1 00:26:59.043 trsvcid: 4420 00:26:59.043 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:59.043 traddr: 10.0.0.1 00:26:59.043 eflags: none 00:26:59.043 sectype: none 00:26:59.043 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:59.043 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:59.303 ===================================================== 00:26:59.303 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:59.303 ===================================================== 00:26:59.303 Controller Capabilities/Features 00:26:59.303 ================================ 00:26:59.303 Vendor 
ID: 0000 00:26:59.303 Subsystem Vendor ID: 0000 00:26:59.303 Serial Number: babcd52ee96550b589a7 00:26:59.303 Model Number: Linux 00:26:59.303 Firmware Version: 6.8.9-20 00:26:59.303 Recommended Arb Burst: 0 00:26:59.303 IEEE OUI Identifier: 00 00 00 00:26:59.303 Multi-path I/O 00:26:59.303 May have multiple subsystem ports: No 00:26:59.303 May have multiple controllers: No 00:26:59.303 Associated with SR-IOV VF: No 00:26:59.303 Max Data Transfer Size: Unlimited 00:26:59.303 Max Number of Namespaces: 0 00:26:59.303 Max Number of I/O Queues: 1024 00:26:59.303 NVMe Specification Version (VS): 1.3 00:26:59.303 NVMe Specification Version (Identify): 1.3 00:26:59.303 Maximum Queue Entries: 1024 00:26:59.303 Contiguous Queues Required: No 00:26:59.303 Arbitration Mechanisms Supported 00:26:59.303 Weighted Round Robin: Not Supported 00:26:59.303 Vendor Specific: Not Supported 00:26:59.303 Reset Timeout: 7500 ms 00:26:59.303 Doorbell Stride: 4 bytes 00:26:59.303 NVM Subsystem Reset: Not Supported 00:26:59.303 Command Sets Supported 00:26:59.303 NVM Command Set: Supported 00:26:59.303 Boot Partition: Not Supported 00:26:59.303 Memory Page Size Minimum: 4096 bytes 00:26:59.303 Memory Page Size Maximum: 4096 bytes 00:26:59.303 Persistent Memory Region: Not Supported 00:26:59.303 Optional Asynchronous Events Supported 00:26:59.303 Namespace Attribute Notices: Not Supported 00:26:59.303 Firmware Activation Notices: Not Supported 00:26:59.303 ANA Change Notices: Not Supported 00:26:59.303 PLE Aggregate Log Change Notices: Not Supported 00:26:59.303 LBA Status Info Alert Notices: Not Supported 00:26:59.303 EGE Aggregate Log Change Notices: Not Supported 00:26:59.303 Normal NVM Subsystem Shutdown event: Not Supported 00:26:59.303 Zone Descriptor Change Notices: Not Supported 00:26:59.303 Discovery Log Change Notices: Supported 00:26:59.303 Controller Attributes 00:26:59.303 128-bit Host Identifier: Not Supported 00:26:59.303 Non-Operational Permissive Mode: Not Supported 00:26:59.303 NVM Sets: Not Supported 00:26:59.303 Read Recovery Levels: Not Supported 00:26:59.303 Endurance Groups: Not Supported 00:26:59.303 Predictable Latency Mode: Not Supported 00:26:59.303 Traffic Based Keep ALive: Not Supported 00:26:59.303 Namespace Granularity: Not Supported 00:26:59.303 SQ Associations: Not Supported 00:26:59.303 UUID List: Not Supported 00:26:59.303 Multi-Domain Subsystem: Not Supported 00:26:59.303 Fixed Capacity Management: Not Supported 00:26:59.303 Variable Capacity Management: Not Supported 00:26:59.303 Delete Endurance Group: Not Supported 00:26:59.303 Delete NVM Set: Not Supported 00:26:59.303 Extended LBA Formats Supported: Not Supported 00:26:59.303 Flexible Data Placement Supported: Not Supported 00:26:59.303 00:26:59.303 Controller Memory Buffer Support 00:26:59.303 ================================ 00:26:59.303 Supported: No 00:26:59.303 00:26:59.303 Persistent Memory Region Support 00:26:59.303 ================================ 00:26:59.303 Supported: No 00:26:59.303 00:26:59.303 Admin Command Set Attributes 00:26:59.303 ============================ 00:26:59.303 Security Send/Receive: Not Supported 00:26:59.303 Format NVM: Not Supported 00:26:59.303 Firmware Activate/Download: Not Supported 00:26:59.303 Namespace Management: Not Supported 00:26:59.303 Device Self-Test: Not Supported 00:26:59.303 Directives: Not Supported 00:26:59.303 NVMe-MI: Not Supported 00:26:59.303 Virtualization Management: Not Supported 00:26:59.303 Doorbell Buffer Config: Not Supported 00:26:59.303 Get LBA Status Capability: 
Not Supported 00:26:59.303 Command & Feature Lockdown Capability: Not Supported 00:26:59.303 Abort Command Limit: 1 00:26:59.303 Async Event Request Limit: 1 00:26:59.303 Number of Firmware Slots: N/A 00:26:59.303 Firmware Slot 1 Read-Only: N/A 00:26:59.303 Firmware Activation Without Reset: N/A 00:26:59.303 Multiple Update Detection Support: N/A 00:26:59.303 Firmware Update Granularity: No Information Provided 00:26:59.303 Per-Namespace SMART Log: No 00:26:59.303 Asymmetric Namespace Access Log Page: Not Supported 00:26:59.303 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:59.303 Command Effects Log Page: Not Supported 00:26:59.303 Get Log Page Extended Data: Supported 00:26:59.303 Telemetry Log Pages: Not Supported 00:26:59.303 Persistent Event Log Pages: Not Supported 00:26:59.303 Supported Log Pages Log Page: May Support 00:26:59.303 Commands Supported & Effects Log Page: Not Supported 00:26:59.303 Feature Identifiers & Effects Log Page:May Support 00:26:59.303 NVMe-MI Commands & Effects Log Page: May Support 00:26:59.303 Data Area 4 for Telemetry Log: Not Supported 00:26:59.303 Error Log Page Entries Supported: 1 00:26:59.303 Keep Alive: Not Supported 00:26:59.303 00:26:59.303 NVM Command Set Attributes 00:26:59.303 ========================== 00:26:59.303 Submission Queue Entry Size 00:26:59.303 Max: 1 00:26:59.303 Min: 1 00:26:59.303 Completion Queue Entry Size 00:26:59.303 Max: 1 00:26:59.303 Min: 1 00:26:59.303 Number of Namespaces: 0 00:26:59.303 Compare Command: Not Supported 00:26:59.303 Write Uncorrectable Command: Not Supported 00:26:59.303 Dataset Management Command: Not Supported 00:26:59.303 Write Zeroes Command: Not Supported 00:26:59.303 Set Features Save Field: Not Supported 00:26:59.303 Reservations: Not Supported 00:26:59.303 Timestamp: Not Supported 00:26:59.303 Copy: Not Supported 00:26:59.303 Volatile Write Cache: Not Present 00:26:59.303 Atomic Write Unit (Normal): 1 00:26:59.303 Atomic Write Unit (PFail): 1 00:26:59.303 Atomic Compare & Write Unit: 1 00:26:59.303 Fused Compare & Write: Not Supported 00:26:59.303 Scatter-Gather List 00:26:59.303 SGL Command Set: Supported 00:26:59.303 SGL Keyed: Not Supported 00:26:59.303 SGL Bit Bucket Descriptor: Not Supported 00:26:59.303 SGL Metadata Pointer: Not Supported 00:26:59.303 Oversized SGL: Not Supported 00:26:59.303 SGL Metadata Address: Not Supported 00:26:59.303 SGL Offset: Supported 00:26:59.303 Transport SGL Data Block: Not Supported 00:26:59.303 Replay Protected Memory Block: Not Supported 00:26:59.303 00:26:59.303 Firmware Slot Information 00:26:59.303 ========================= 00:26:59.303 Active slot: 0 00:26:59.303 00:26:59.303 00:26:59.303 Error Log 00:26:59.303 ========= 00:26:59.303 00:26:59.303 Active Namespaces 00:26:59.303 ================= 00:26:59.303 Discovery Log Page 00:26:59.303 ================== 00:26:59.303 Generation Counter: 2 00:26:59.303 Number of Records: 2 00:26:59.303 Record Format: 0 00:26:59.303 00:26:59.303 Discovery Log Entry 0 00:26:59.303 ---------------------- 00:26:59.303 Transport Type: 3 (TCP) 00:26:59.303 Address Family: 1 (IPv4) 00:26:59.303 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:59.303 Entry Flags: 00:26:59.303 Duplicate Returned Information: 0 00:26:59.303 Explicit Persistent Connection Support for Discovery: 0 00:26:59.303 Transport Requirements: 00:26:59.303 Secure Channel: Not Specified 00:26:59.303 Port ID: 1 (0x0001) 00:26:59.303 Controller ID: 65535 (0xffff) 00:26:59.303 Admin Max SQ Size: 32 00:26:59.303 Transport Service Identifier: 4420 
00:26:59.303 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:59.303 Transport Address: 10.0.0.1 00:26:59.303 Discovery Log Entry 1 00:26:59.303 ---------------------- 00:26:59.303 Transport Type: 3 (TCP) 00:26:59.303 Address Family: 1 (IPv4) 00:26:59.303 Subsystem Type: 2 (NVM Subsystem) 00:26:59.303 Entry Flags: 00:26:59.303 Duplicate Returned Information: 0 00:26:59.303 Explicit Persistent Connection Support for Discovery: 0 00:26:59.303 Transport Requirements: 00:26:59.303 Secure Channel: Not Specified 00:26:59.303 Port ID: 1 (0x0001) 00:26:59.303 Controller ID: 65535 (0xffff) 00:26:59.303 Admin Max SQ Size: 32 00:26:59.303 Transport Service Identifier: 4420 00:26:59.303 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:59.303 Transport Address: 10.0.0.1 00:26:59.303 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:59.303 get_feature(0x01) failed 00:26:59.304 get_feature(0x02) failed 00:26:59.304 get_feature(0x04) failed 00:26:59.304 ===================================================== 00:26:59.304 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:59.304 ===================================================== 00:26:59.304 Controller Capabilities/Features 00:26:59.304 ================================ 00:26:59.304 Vendor ID: 0000 00:26:59.304 Subsystem Vendor ID: 0000 00:26:59.304 Serial Number: 8c92ec728c20946fbe4c 00:26:59.304 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:59.304 Firmware Version: 6.8.9-20 00:26:59.304 Recommended Arb Burst: 6 00:26:59.304 IEEE OUI Identifier: 00 00 00 00:26:59.304 Multi-path I/O 00:26:59.304 May have multiple subsystem ports: Yes 00:26:59.304 May have multiple controllers: Yes 00:26:59.304 Associated with SR-IOV VF: No 00:26:59.304 Max Data Transfer Size: Unlimited 00:26:59.304 Max Number of Namespaces: 1024 00:26:59.304 Max Number of I/O Queues: 128 00:26:59.304 NVMe Specification Version (VS): 1.3 00:26:59.304 NVMe Specification Version (Identify): 1.3 00:26:59.304 Maximum Queue Entries: 1024 00:26:59.304 Contiguous Queues Required: No 00:26:59.304 Arbitration Mechanisms Supported 00:26:59.304 Weighted Round Robin: Not Supported 00:26:59.304 Vendor Specific: Not Supported 00:26:59.304 Reset Timeout: 7500 ms 00:26:59.304 Doorbell Stride: 4 bytes 00:26:59.304 NVM Subsystem Reset: Not Supported 00:26:59.304 Command Sets Supported 00:26:59.304 NVM Command Set: Supported 00:26:59.304 Boot Partition: Not Supported 00:26:59.304 Memory Page Size Minimum: 4096 bytes 00:26:59.304 Memory Page Size Maximum: 4096 bytes 00:26:59.304 Persistent Memory Region: Not Supported 00:26:59.304 Optional Asynchronous Events Supported 00:26:59.304 Namespace Attribute Notices: Supported 00:26:59.304 Firmware Activation Notices: Not Supported 00:26:59.304 ANA Change Notices: Supported 00:26:59.304 PLE Aggregate Log Change Notices: Not Supported 00:26:59.304 LBA Status Info Alert Notices: Not Supported 00:26:59.304 EGE Aggregate Log Change Notices: Not Supported 00:26:59.304 Normal NVM Subsystem Shutdown event: Not Supported 00:26:59.304 Zone Descriptor Change Notices: Not Supported 00:26:59.304 Discovery Log Change Notices: Not Supported 00:26:59.304 Controller Attributes 00:26:59.304 128-bit Host Identifier: Supported 00:26:59.304 Non-Operational Permissive Mode: Not Supported 
00:26:59.304 NVM Sets: Not Supported 00:26:59.304 Read Recovery Levels: Not Supported 00:26:59.304 Endurance Groups: Not Supported 00:26:59.304 Predictable Latency Mode: Not Supported 00:26:59.304 Traffic Based Keep ALive: Supported 00:26:59.304 Namespace Granularity: Not Supported 00:26:59.304 SQ Associations: Not Supported 00:26:59.304 UUID List: Not Supported 00:26:59.304 Multi-Domain Subsystem: Not Supported 00:26:59.304 Fixed Capacity Management: Not Supported 00:26:59.304 Variable Capacity Management: Not Supported 00:26:59.304 Delete Endurance Group: Not Supported 00:26:59.304 Delete NVM Set: Not Supported 00:26:59.304 Extended LBA Formats Supported: Not Supported 00:26:59.304 Flexible Data Placement Supported: Not Supported 00:26:59.304 00:26:59.304 Controller Memory Buffer Support 00:26:59.304 ================================ 00:26:59.304 Supported: No 00:26:59.304 00:26:59.304 Persistent Memory Region Support 00:26:59.304 ================================ 00:26:59.304 Supported: No 00:26:59.304 00:26:59.304 Admin Command Set Attributes 00:26:59.304 ============================ 00:26:59.304 Security Send/Receive: Not Supported 00:26:59.304 Format NVM: Not Supported 00:26:59.304 Firmware Activate/Download: Not Supported 00:26:59.304 Namespace Management: Not Supported 00:26:59.304 Device Self-Test: Not Supported 00:26:59.304 Directives: Not Supported 00:26:59.304 NVMe-MI: Not Supported 00:26:59.304 Virtualization Management: Not Supported 00:26:59.304 Doorbell Buffer Config: Not Supported 00:26:59.304 Get LBA Status Capability: Not Supported 00:26:59.304 Command & Feature Lockdown Capability: Not Supported 00:26:59.304 Abort Command Limit: 4 00:26:59.304 Async Event Request Limit: 4 00:26:59.304 Number of Firmware Slots: N/A 00:26:59.304 Firmware Slot 1 Read-Only: N/A 00:26:59.304 Firmware Activation Without Reset: N/A 00:26:59.304 Multiple Update Detection Support: N/A 00:26:59.304 Firmware Update Granularity: No Information Provided 00:26:59.304 Per-Namespace SMART Log: Yes 00:26:59.304 Asymmetric Namespace Access Log Page: Supported 00:26:59.304 ANA Transition Time : 10 sec 00:26:59.304 00:26:59.304 Asymmetric Namespace Access Capabilities 00:26:59.304 ANA Optimized State : Supported 00:26:59.304 ANA Non-Optimized State : Supported 00:26:59.304 ANA Inaccessible State : Supported 00:26:59.304 ANA Persistent Loss State : Supported 00:26:59.304 ANA Change State : Supported 00:26:59.304 ANAGRPID is not changed : No 00:26:59.304 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:59.304 00:26:59.304 ANA Group Identifier Maximum : 128 00:26:59.304 Number of ANA Group Identifiers : 128 00:26:59.304 Max Number of Allowed Namespaces : 1024 00:26:59.304 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:59.304 Command Effects Log Page: Supported 00:26:59.304 Get Log Page Extended Data: Supported 00:26:59.304 Telemetry Log Pages: Not Supported 00:26:59.304 Persistent Event Log Pages: Not Supported 00:26:59.304 Supported Log Pages Log Page: May Support 00:26:59.304 Commands Supported & Effects Log Page: Not Supported 00:26:59.304 Feature Identifiers & Effects Log Page:May Support 00:26:59.304 NVMe-MI Commands & Effects Log Page: May Support 00:26:59.304 Data Area 4 for Telemetry Log: Not Supported 00:26:59.304 Error Log Page Entries Supported: 128 00:26:59.304 Keep Alive: Supported 00:26:59.304 Keep Alive Granularity: 1000 ms 00:26:59.304 00:26:59.304 NVM Command Set Attributes 00:26:59.304 ========================== 00:26:59.304 Submission Queue Entry Size 00:26:59.304 Max: 64 
00:26:59.304 Min: 64 00:26:59.304 Completion Queue Entry Size 00:26:59.304 Max: 16 00:26:59.304 Min: 16 00:26:59.304 Number of Namespaces: 1024 00:26:59.304 Compare Command: Not Supported 00:26:59.304 Write Uncorrectable Command: Not Supported 00:26:59.304 Dataset Management Command: Supported 00:26:59.304 Write Zeroes Command: Supported 00:26:59.304 Set Features Save Field: Not Supported 00:26:59.304 Reservations: Not Supported 00:26:59.304 Timestamp: Not Supported 00:26:59.304 Copy: Not Supported 00:26:59.304 Volatile Write Cache: Present 00:26:59.304 Atomic Write Unit (Normal): 1 00:26:59.304 Atomic Write Unit (PFail): 1 00:26:59.304 Atomic Compare & Write Unit: 1 00:26:59.304 Fused Compare & Write: Not Supported 00:26:59.304 Scatter-Gather List 00:26:59.304 SGL Command Set: Supported 00:26:59.304 SGL Keyed: Not Supported 00:26:59.304 SGL Bit Bucket Descriptor: Not Supported 00:26:59.304 SGL Metadata Pointer: Not Supported 00:26:59.304 Oversized SGL: Not Supported 00:26:59.304 SGL Metadata Address: Not Supported 00:26:59.304 SGL Offset: Supported 00:26:59.304 Transport SGL Data Block: Not Supported 00:26:59.304 Replay Protected Memory Block: Not Supported 00:26:59.304 00:26:59.304 Firmware Slot Information 00:26:59.304 ========================= 00:26:59.304 Active slot: 0 00:26:59.304 00:26:59.304 Asymmetric Namespace Access 00:26:59.304 =========================== 00:26:59.304 Change Count : 0 00:26:59.304 Number of ANA Group Descriptors : 1 00:26:59.304 ANA Group Descriptor : 0 00:26:59.304 ANA Group ID : 1 00:26:59.304 Number of NSID Values : 1 00:26:59.304 Change Count : 0 00:26:59.304 ANA State : 1 00:26:59.304 Namespace Identifier : 1 00:26:59.304 00:26:59.304 Commands Supported and Effects 00:26:59.304 ============================== 00:26:59.304 Admin Commands 00:26:59.304 -------------- 00:26:59.304 Get Log Page (02h): Supported 00:26:59.304 Identify (06h): Supported 00:26:59.304 Abort (08h): Supported 00:26:59.304 Set Features (09h): Supported 00:26:59.304 Get Features (0Ah): Supported 00:26:59.304 Asynchronous Event Request (0Ch): Supported 00:26:59.304 Keep Alive (18h): Supported 00:26:59.304 I/O Commands 00:26:59.304 ------------ 00:26:59.304 Flush (00h): Supported 00:26:59.304 Write (01h): Supported LBA-Change 00:26:59.304 Read (02h): Supported 00:26:59.304 Write Zeroes (08h): Supported LBA-Change 00:26:59.304 Dataset Management (09h): Supported 00:26:59.304 00:26:59.304 Error Log 00:26:59.304 ========= 00:26:59.304 Entry: 0 00:26:59.304 Error Count: 0x3 00:26:59.304 Submission Queue Id: 0x0 00:26:59.304 Command Id: 0x5 00:26:59.304 Phase Bit: 0 00:26:59.304 Status Code: 0x2 00:26:59.304 Status Code Type: 0x0 00:26:59.304 Do Not Retry: 1 00:26:59.304 Error Location: 0x28 00:26:59.304 LBA: 0x0 00:26:59.304 Namespace: 0x0 00:26:59.304 Vendor Log Page: 0x0 00:26:59.304 ----------- 00:26:59.304 Entry: 1 00:26:59.304 Error Count: 0x2 00:26:59.304 Submission Queue Id: 0x0 00:26:59.304 Command Id: 0x5 00:26:59.304 Phase Bit: 0 00:26:59.304 Status Code: 0x2 00:26:59.304 Status Code Type: 0x0 00:26:59.304 Do Not Retry: 1 00:26:59.304 Error Location: 0x28 00:26:59.305 LBA: 0x0 00:26:59.305 Namespace: 0x0 00:26:59.305 Vendor Log Page: 0x0 00:26:59.305 ----------- 00:26:59.305 Entry: 2 00:26:59.305 Error Count: 0x1 00:26:59.305 Submission Queue Id: 0x0 00:26:59.305 Command Id: 0x4 00:26:59.305 Phase Bit: 0 00:26:59.305 Status Code: 0x2 00:26:59.305 Status Code Type: 0x0 00:26:59.305 Do Not Retry: 1 00:26:59.305 Error Location: 0x28 00:26:59.305 LBA: 0x0 00:26:59.305 Namespace: 0x0 
00:26:59.305 Vendor Log Page: 0x0 00:26:59.305 00:26:59.305 Number of Queues 00:26:59.305 ================ 00:26:59.305 Number of I/O Submission Queues: 128 00:26:59.305 Number of I/O Completion Queues: 128 00:26:59.305 00:26:59.305 ZNS Specific Controller Data 00:26:59.305 ============================ 00:26:59.305 Zone Append Size Limit: 0 00:26:59.305 00:26:59.305 00:26:59.305 Active Namespaces 00:26:59.305 ================= 00:26:59.305 get_feature(0x05) failed 00:26:59.305 Namespace ID:1 00:26:59.305 Command Set Identifier: NVM (00h) 00:26:59.305 Deallocate: Supported 00:26:59.305 Deallocated/Unwritten Error: Not Supported 00:26:59.305 Deallocated Read Value: Unknown 00:26:59.305 Deallocate in Write Zeroes: Not Supported 00:26:59.305 Deallocated Guard Field: 0xFFFF 00:26:59.305 Flush: Supported 00:26:59.305 Reservation: Not Supported 00:26:59.305 Namespace Sharing Capabilities: Multiple Controllers 00:26:59.305 Size (in LBAs): 4194304 (2GiB) 00:26:59.305 Capacity (in LBAs): 4194304 (2GiB) 00:26:59.305 Utilization (in LBAs): 4194304 (2GiB) 00:26:59.305 UUID: cb05757f-f830-441f-8f27-17340ed62c77 00:26:59.305 Thin Provisioning: Not Supported 00:26:59.305 Per-NS Atomic Units: Yes 00:26:59.305 Atomic Boundary Size (Normal): 0 00:26:59.305 Atomic Boundary Size (PFail): 0 00:26:59.305 Atomic Boundary Offset: 0 00:26:59.305 NGUID/EUI64 Never Reused: No 00:26:59.305 ANA group ID: 1 00:26:59.305 Namespace Write Protected: No 00:26:59.305 Number of LBA Formats: 1 00:26:59.305 Current LBA Format: LBA Format #00 00:26:59.305 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:59.305 00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:59.305 rmmod nvme_tcp 00:26:59.305 rmmod nvme_fabrics 00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:59.305 14:28:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.839 14:29:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:01.839 14:29:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:01.839 14:29:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:01.839 14:29:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:01.839 14:29:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:01.839 14:29:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:01.839 14:29:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:01.839 14:29:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:01.839 14:29:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:01.839 14:29:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:01.839 14:29:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:04.372 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:27:04.939 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:04.939 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:04.939 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:04.939 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:04.939 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:04.939 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:04.939 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:04.939 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:04.939 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:04.939 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:04.939 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:04.939 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:04.939 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:04.939 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:04.939 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:04.939 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:05.876 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:05.876 00:27:05.876 real 0m18.610s 00:27:05.876 user 0m4.981s 00:27:05.876 sys 0m9.982s 
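nvmftestfini and clean_kernel_target then undo everything in reverse: unload the host-side modules (the rmmod lines above), strip the tagged firewall rule, flush the initiator address and drop the namespace, and dismantle the configfs tree before removing nvmet itself. Roughly as below; the namespace deletion is hidden behind the xtrace-disabled _remove_spdk_ns, so that line is an assumption:

# Teardown mirroring the nvmftestfini / clean_kernel_target records.
modprobe -r nvme-tcp nvme-fabrics                     # host side
iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop tagged rule
ip netns delete cvl_0_0_ns_spdk                       # inside _remove_spdk_ns (assumed)
ip -4 addr flush cvl_0_1

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
modprobe -r nvmet_tcp nvmet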
00:27:05.876 14:29:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:05.876 14:29:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:05.876 ************************************ 00:27:05.876 END TEST nvmf_identify_kernel_target 00:27:05.876 ************************************ 00:27:05.876 14:29:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:05.876 14:29:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:05.876 14:29:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:05.876 14:29:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.876 ************************************ 00:27:05.876 START TEST nvmf_auth_host 00:27:05.876 ************************************ 00:27:05.876 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:06.136 * Looking for test storage... 00:27:06.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:06.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.136 --rc genhtml_branch_coverage=1 00:27:06.136 --rc genhtml_function_coverage=1 00:27:06.136 --rc genhtml_legend=1 00:27:06.136 --rc geninfo_all_blocks=1 00:27:06.136 --rc geninfo_unexecuted_blocks=1 00:27:06.136 00:27:06.136 ' 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:06.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.136 --rc genhtml_branch_coverage=1 00:27:06.136 --rc genhtml_function_coverage=1 00:27:06.136 --rc genhtml_legend=1 00:27:06.136 --rc geninfo_all_blocks=1 00:27:06.136 --rc geninfo_unexecuted_blocks=1 00:27:06.136 00:27:06.136 ' 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:06.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.136 --rc genhtml_branch_coverage=1 00:27:06.136 --rc genhtml_function_coverage=1 00:27:06.136 --rc genhtml_legend=1 00:27:06.136 --rc geninfo_all_blocks=1 00:27:06.136 --rc geninfo_unexecuted_blocks=1 00:27:06.136 00:27:06.136 ' 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:06.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.136 --rc genhtml_branch_coverage=1 00:27:06.136 --rc genhtml_function_coverage=1 00:27:06.136 --rc genhtml_legend=1 00:27:06.136 --rc geninfo_all_blocks=1 00:27:06.136 --rc geninfo_unexecuted_blocks=1 00:27:06.136 00:27:06.136 ' 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.136 14:29:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.136 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:06.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:06.137 14:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.702 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:12.702 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:12.702 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:12.702 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:12.702 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:12.702 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:12.702 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:12.702 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:12.702 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:12.703 14:29:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:12.703 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:12.703 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.703 
14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:12.703 Found net devices under 0000:af:00.0: cvl_0_0 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:12.703 Found net devices under 0000:af:00.1: cvl_0_1 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:12.703 14:29:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:12.703 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:12.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:12.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:27:12.965 00:27:12.965 --- 10.0.0.2 ping statistics --- 00:27:12.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.965 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:12.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:12.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:27:12.965 00:27:12.965 --- 10.0.0.1 ping statistics --- 00:27:12.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.965 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1784609 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1784609 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1784609 ']' 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
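The plumbing traced above is what lets one machine act as both ends of the TCP connection: the first E810 port (cvl_0_0) is moved into a private namespace and becomes the target side, while its peer port (cvl_0_1) stays in the root namespace as the initiator, the two ports apparently cabled back-to-back on this rig. In outline, with the same names and addresses as the trace:

ns=cvl_0_0_ns_spdk
ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"                               # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0       # target side
ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP through
ping -c 1 10.0.0.2                                            # initiator -> target
ip netns exec "$ns" ping -c 1 10.0.0.1                        # target -> initiator

Both pings coming back with a reply is what lets nvmf_tcp_init finish and the common setup return 0 before nvme-tcp is loaded.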
00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:12.965 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e14cab055ccb99857710677327b21ecd 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.M7A 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e14cab055ccb99857710677327b21ecd 0 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e14cab055ccb99857710677327b21ecd 0 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e14cab055ccb99857710677327b21ecd 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:13.224 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.M7A 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.M7A 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.M7A 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:13.225 14:29:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5118959aa62bc13b2580ecda6425c0a122224c329afb7db5396c9c9fadb5fff2 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.P96 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5118959aa62bc13b2580ecda6425c0a122224c329afb7db5396c9c9fadb5fff2 3 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5118959aa62bc13b2580ecda6425c0a122224c329afb7db5396c9c9fadb5fff2 3 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5118959aa62bc13b2580ecda6425c0a122224c329afb7db5396c9c9fadb5fff2 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.P96 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.P96 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.P96 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d17fa7ee74388b2b2993abd8e27734a43227eb3245aeb2b6 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.mqO 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d17fa7ee74388b2b2993abd8e27734a43227eb3245aeb2b6 0 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d17fa7ee74388b2b2993abd8e27734a43227eb3245aeb2b6 0 
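gen_dhchap_key, traced here once per key slot, draws len/2 random bytes, hex-encodes them, and wraps the result in a DH-HMAC-CHAP secret file via a small inline python program whose body xtrace does not show. A condensed sketch of one null-digest iteration, on the assumption that the hidden python follows the spec's DHHC-1:<digest id>:<base64(secret || CRC-32)>: representation with the ASCII hex string itself serving as the secret:

key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars, as in 'gen_dhchap_key null 32'
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" > "$file" <<'PYEOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                    # assumption: the hex string is used as-is
crc = zlib.crc32(secret).to_bytes(4, "little")   # 4-byte CRC-32 appended before encoding
print(f"DHHC-1:00:{base64.b64encode(secret + crc).decode()}:")  # digest id 00 = null
PYEOF
chmod 0600 "$file"                     # matches the chmod traced above

The digest id (0 through 3 for null, sha256, sha384, sha512) and the lengths 32/48/64 are the only inputs that vary across the iterations that follow.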
00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d17fa7ee74388b2b2993abd8e27734a43227eb3245aeb2b6 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:13.225 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:13.484 14:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.mqO 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.mqO 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.mqO 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a6f490684563e13ac4fc73fcf5a0cdb10c0a363eb3b37f58 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.7bv 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a6f490684563e13ac4fc73fcf5a0cdb10c0a363eb3b37f58 2 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a6f490684563e13ac4fc73fcf5a0cdb10c0a363eb3b37f58 2 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a6f490684563e13ac4fc73fcf5a0cdb10c0a363eb3b37f58 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.7bv 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.7bv 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.7bv 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.484 14:29:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d61eebe396ac5e2b0afa12f1a2217fa1 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Br5 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d61eebe396ac5e2b0afa12f1a2217fa1 1 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d61eebe396ac5e2b0afa12f1a2217fa1 1 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:13.484 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d61eebe396ac5e2b0afa12f1a2217fa1 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Br5 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Br5 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Br5 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=510062a16778e0bd59d26da021ff7b34 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.tXM 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 510062a16778e0bd59d26da021ff7b34 1 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 510062a16778e0bd59d26da021ff7b34 1 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=510062a16778e0bd59d26da021ff7b34 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.tXM 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.tXM 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.tXM 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d6632f60479234c91f6a772090c47c31c9e27a41915b4992 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.P3a 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d6632f60479234c91f6a772090c47c31c9e27a41915b4992 2 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d6632f60479234c91f6a772090c47c31c9e27a41915b4992 2 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d6632f60479234c91f6a772090c47c31c9e27a41915b4992 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:13.485 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.P3a 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.P3a 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.P3a 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:13.744 14:29:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1ce895efd09194081b0bc65e1b4afabb 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Sdd 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1ce895efd09194081b0bc65e1b4afabb 0 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1ce895efd09194081b0bc65e1b4afabb 0 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1ce895efd09194081b0bc65e1b4afabb 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Sdd 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Sdd 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Sdd 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=81e2626121d379e77eb067536334f174e6c28d28a8643d2443411845e12042ef 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.db9 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 81e2626121d379e77eb067536334f174e6c28d28a8643d2443411845e12042ef 3 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 81e2626121d379e77eb067536334f174e6c28d28a8643d2443411845e12042ef 3 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=81e2626121d379e77eb067536334f174e6c28d28a8643d2443411845e12042ef 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.db9 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.db9 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.db9 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1784609 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1784609 ']' 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:13.744 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.M7A 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.P96 ]] 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.P96 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.mqO 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.7bv ]] 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.7bv 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Br5 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.tXM ]] 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tXM 00:27:14.003 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.P3a 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Sdd ]] 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Sdd 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.db9 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:14.004 14:29:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:14.004 14:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:17.295 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:27:17.295 Waiting for block devices as requested 00:27:17.295 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:27:17.295 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:17.553 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:17.553 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:17.553 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:17.553 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:17.811 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:17.811 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:17.811 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:18.070 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:18.070 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:18.070 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:18.070 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:18.329 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:18.329 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:18.329 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:18.587 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:19.153 No valid GPT data, bailing 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:27:19.153 No valid GPT data, bailing 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n2 ]] 00:27:19.153 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n2 00:27:19.154 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:27:19.154 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:27:19.154 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:27:19.154 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # continue 00:27:19.154 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:27:19.154 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:19.154 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:19.154 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:19.154 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:19.154 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:19.154 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:27:19.154 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:19.154 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:19.154 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:19.154 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:19.154 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:19.154 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:19.154 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 
-a 10.0.0.1 -t tcp -s 4420
00:27:19.412
00:27:19.412 Discovery Log Number of Records 2, Generation counter 2
00:27:19.412 =====Discovery Log Entry 0======
00:27:19.412 trtype: tcp
00:27:19.412 adrfam: ipv4
00:27:19.412 subtype: current discovery subsystem
00:27:19.412 treq: not specified, sq flow control disable supported
00:27:19.412 portid: 1
00:27:19.412 trsvcid: 4420
00:27:19.412 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:27:19.412 traddr: 10.0.0.1
00:27:19.412 eflags: none
00:27:19.412 sectype: none
00:27:19.412 =====Discovery Log Entry 1======
00:27:19.412 trtype: tcp
00:27:19.412 adrfam: ipv4
00:27:19.412 subtype: nvme subsystem
00:27:19.412 treq: not specified, sq flow control disable supported
00:27:19.412 portid: 1
00:27:19.412 trsvcid: 4420
00:27:19.412 subnqn: nqn.2024-02.io.spdk:cnode0
00:27:19.412 traddr: 10.0.0.1
00:27:19.412 eflags: none
00:27:19.412 sectype: none
00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: ]] 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:19.412 14:29:19
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.412 14:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.412 nvme0n1 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # 
rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: ]] 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.412 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.413 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:19.413 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.413 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:19.413 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.413 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.671 nvme0n1 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.671 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: ]] 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.672 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
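The trace above is the host half of one DH-HMAC-CHAP round: the test pins bdev_nvme to a single digest/DH-group combination, then attaches the kernel target with the host key and, for bidirectional rounds, a controller key. A minimal sketch of the same two RPCs as they would be issued by hand with SPDK's scripts/rpc.py, assuming key1/ckey1 were registered with the keyring earlier in the test (that step is not part of this excerpt):

    # Allow only one digest and one DH group for the DH-HMAC-CHAP negotiation
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # Attach the kernel target; --dhchap-key authenticates the host,
    # --dhchap-ctrlr-key additionally makes the controller prove itself
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

If the handshake succeeds, bdev_nvme_get_controllers reports nvme0 and the namespace surfaces as the nvme0n1 bdev seen below, after which the controller is detached again for the next key round.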
00:27:19.931 nvme0n1 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: ]] 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe2048 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.931 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.192 nvme0n1 00:27:20.192 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.192 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha256 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: ]] 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.193 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.518 nvme0n1 00:27:20.518 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.518 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.518 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.518 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.518 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.518 14:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- 
# ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.518 nvme0n1 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.518 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
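Each nvmet_auth_set_key round in the trace provisions the target side of the handshake by writing the digest, DH group, and DHHC-1 secrets into the host entry created earlier under /sys/kernel/config/nvmet/hosts/. The xtrace output only shows the echo arguments, not the redirection targets, so the configfs attribute names in this sketch are an assumption based on the usual Linux nvmet layout (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) rather than something visible in this log:

    # Hypothetical reconstruction of one provisioning round (secrets abbreviated)
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"           # digest used for the challenge
    echo ffdhe2048 > "$host/dhchap_dhgroup"             # FFDHE group for the exchange
    echo 'DHHC-1:00:ZTE0...:' > "$host/dhchap_key"      # host secret
    echo 'DHHC-1:03:NTEx...:' > "$host/dhchap_ctrl_key" # controller secret; only written when the round has a ckey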
00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: ]] 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 
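Between rounds the test keeps re-running get_main_ns_ip, whose whole body is visible in the trace: it maps the transport to the name of the environment variable holding the right address, then dereferences it. Condensed into a standalone function (same logic as the traced nvmf/common.sh code; the surrounding harness variables are assumed):

    # Pick the address the initiator should dial for the current transport
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP   # kernel RDMA targets listen on the first target IP
            ["tcp"]=NVMF_INITIATOR_IP       # kernel TCP targets listen on the initiator-side IP
        )
        [[ -z $TEST_TRANSPORT ]] && return 1    # transport must be known (tcp in this run)
        ip=${ip_candidates[$TEST_TRANSPORT]}    # name of the variable to dereference
        [[ -z ${!ip} ]] && return 1             # that variable must be populated
        echo "${!ip}"                           # 10.0.0.1 in this log
    }

With TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1, every attach in this section therefore targets 10.0.0.1:4420.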
00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.796 nvme0n1 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.796 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: ]] 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.797 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.056 nvme0n1 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: ]] 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 
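Stepping back from the per-key detail, the loop headers traced at host/auth.sh@100-102 and the digest/dhgroup lists printed at host/auth.sh@93-94 show the overall shape of this section: a full sweep over every digest, every FFDHE group, and every keyid, provisioning the kernel target and re-attaching each time. Roughly, with loop variable names as traced and the list literals taken from the printf output above:

    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do                       # keyids 0..4 in this log
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target side: configfs writes
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # host side: attach, verify, detach
            done
        done
    done

That is why the same attach/detach pattern repeats: the sha256+ffdhe2048 rounds complete above, and the sha256+ffdhe3072 rounds are in progress here.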
00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.056 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.315 nvme0n1 00:27:21.315 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.315 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.315 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.315 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.315 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.315 14:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe3072 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: ]] 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.315 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.316 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.316 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.316 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.316 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.316 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.316 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.574 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:21.574 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.574 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.574 nvme0n1 00:27:21.574 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.574 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.574 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:27:21.574 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.574 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.574 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.574 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.574 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.574 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.574 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.574 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.574 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.574 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
local ip 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.575 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.833 nvme0n1 00:27:21.833 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.833 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.833 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.833 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.833 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.833 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.833 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.833 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.833 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.833 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.833 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.833 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: ]] 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:21.834 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.834 14:29:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.092 nvme0n1 00:27:22.092 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.092 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.092 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.092 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.092 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.092 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: ]] 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.351 14:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.610 nvme0n1 00:27:22.610 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.610 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.610 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.610 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.610 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.610 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.610 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.610 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.610 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.610 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.610 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.610 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.610 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 2 00:27:22.610 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.610 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.610 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.610 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:22.610 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: ]] 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.611 
14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.611 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.870 nvme0n1 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: ]] 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.870 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.129 nvme0n1 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.129 
14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:23.129 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.130 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.130 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:23.130 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:23.130 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.130 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:23.130 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.130 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.130 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.130 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.130 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.130 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.130 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.130 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.130 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.130 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.130 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 
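
The entries around this point are the xtrace of get_main_ns_ip from nvmf/common.sh, the helper that picks which address the initiator should dial for the transport under test. Below is a minimal reconstruction from the expansions visible in the trace; the array names and tests are read straight off the logged lines, while the control flow between the traced line numbers (including the indirect expansion that turns the name NVMF_INITIATOR_IP into 10.0.0.1) is an assumption, since xtrace does not show those lines.

    # Reconstructed sketch of get_main_ns_ip (nvmf/common.sh); assumes
    # TEST_TRANSPORT and the NVMF_* address variables are already set,
    # as they are earlier in this run.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()

        # Map each transport to the name of the variable holding its address.
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # Bail out when the transport is unset or has no candidate variable.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

        ip=${ip_candidates[$TEST_TRANSPORT]} # the *name* of a variable, NVMF_INITIATOR_IP for tcp
        ip=${!ip}                            # indirect expansion to its value (assumed step)
        [[ -z $ip ]] && return 1

        echo "$ip"
    }

    # For this tcp run: TEST_TRANSPORT=tcp, NVMF_INITIATOR_IP=10.0.0.1 -> prints 10.0.0.1

The value it echoes, 10.0.0.1 here, is exactly the -a argument passed to bdev_nvme_attach_controller in the entries that follow.
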
00:27:23.130 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.130 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.130 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.130 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:23.130 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.130 14:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.388 nvme0n1 00:27:23.388 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.388 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.388 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.388 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.388 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.389 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.647 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.647 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.647 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.647 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.647 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.647 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:23.647 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.647 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:23.647 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.647 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.647 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.647 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:23.647 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:23.647 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:23.647 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.647 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.647 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: ]] 
00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.648 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.907 nvme0n1 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: ]] 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.907 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.479 nvme0n1 00:27:24.479 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.479 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.479 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.479 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.479 14:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: ]] 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.479 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.738 nvme0n1 00:27:24.738 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.738 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.738 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.738 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.738 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.738 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: ]] 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:24.997 14:29:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.997 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.256 nvme0n1 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
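
The loop driving all of these entries (host/auth.sh@101 through @104) runs one cycle per digest, DH group, and key index: program the secret for the host NQN on the kernel nvmet target, restrict the SPDK initiator to the matching digest and DH group, attach with DH-HMAC-CHAP, check that the controller appeared, and detach before the next key. A condensed sketch of the sha256/ffdhe6144, keyid=4 cycle being traced here (no controller key, matching the empty ckey above): the rpc.py flags and the jq check are copied verbatim from the trace (rpc_cmd in the log wraps SPDK's scripts/rpc.py), while the nvmet configfs paths and the earlier keyring registration behind the name key4 are assumptions about parts of the run outside this excerpt.

    # Target side: set this host's DH-HMAC-CHAP parameters (assumed configfs paths,
    # matching the values nvmet_auth_set_key echoes in the trace).
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo ffdhe6144 > "$host/dhchap_dhgroup"
    echo 'DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=:' > "$host/dhchap_key"

    # Host side: allow only the combination under test, then authenticate on connect.
    # "key4" is assumed to name a key registered earlier in the run.
    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4

    # Verify the authenticated controller came up, then tear it down for the next keyid.
    [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc.py bdev_nvme_detach_controller nvme0

Attaching and detaching per key keeps every authentication attempt independent, which is why the trace re-resolves the address and re-runs bdev_nvme_set_options before each connect.
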
00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:25.256 14:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.824 nvme0n1 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: ]] 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:25.824 14:29:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.824 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.393 nvme0n1 00:27:26.393 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.393 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.393 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.393 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.393 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.393 14:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.393 14:29:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: ]] 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.393 14:29:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.393 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.961 nvme0n1 00:27:26.961 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.961 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.961 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.961 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.961 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.961 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.961 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.961 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.961 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.961 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.961 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.961 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.961 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:26.961 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.961 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.961 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.961 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:26.961 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:26.961 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: ]] 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.962 14:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.899 nvme0n1 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: ]] 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.899 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.467 nvme0n1 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:28.467 
14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.467 14:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.467 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.467 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:28.467 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:28.467 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:28.467 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.467 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.467 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:28.467 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.467 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:28.467 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:28.467 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:28.467 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:28.467 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.467 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.035 nvme0n1 00:27:29.035 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.035 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.035 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.035 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.035 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.035 
14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.035 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.035 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.035 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.035 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.035 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.035 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:29.035 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:29.035 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.035 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:29.035 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.035 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.035 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.035 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:29.035 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:29.035 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:29.035 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.035 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: ]] 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.036 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.295 nvme0n1 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
keyid=1 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: ]] 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.295 14:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.295 nvme0n1 00:27:29.295 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.295 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.295 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.295 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.295 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.554 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.554 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.554 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: ]] 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:29.555 14:29:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.555 nvme0n1 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.555 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.814 14:29:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: ]] 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 
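On the initiator side, every iteration traced in this run follows the same pattern (host/auth.sh@55-65). First get_main_ns_ip (nvmf/common.sh@769-783) resolves the address to dial: for -t tcp it selects the NVMF_INITIATOR_IP candidate and, as the surrounding [[ -z 10.0.0.1 ]] checks show, echoes 10.0.0.1. connect_authenticate then drives the host-side JSON-RPCs. The sketch below reconstructs that flow from the trace alone; rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py, and this is a reconstruction, not the verbatim script:

  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      # restrict the initiator to the one digest/dhgroup pair under test (auth.sh@60)
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # connect with the matching secret; the controller key is passed only for
      # keyids that have one (auth.sh@58)
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      # authentication succeeded iff the controller actually appeared (auth.sh@64)
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0   # auth.sh@65
  }
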
00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.814 nvme0n1 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha384 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:29.814 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.073 nvme0n1 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.073 14:29:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:30.073 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: ]] 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.074 14:29:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.074 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.333 nvme0n1 00:27:30.333 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.333 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.333 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.333 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.333 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.333 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.333 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.333 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.333 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.333 14:29:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: ]] 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.333 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.334 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.593 nvme0n1 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.593 14:29:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: ]] 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.593 14:29:31 
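
Every secret in this log follows the NVMe DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:, where <t> says how the secret is stored (00 as-is, 01/02/03 transformed with SHA-256/384/512) and the base64 payload carries the key material plus a CRC-32 check; the keyid 2 pair above uses the 01 variant. Compatible secrets can be minted with nvme-cli, as in the hedged example below (gen-dhchap-key and its flags belong to nvme-cli, not this harness; verify them against your installed version):

# Generate a DHHC-1 secret outside the harness (nvme-cli, assumed available).
nvme gen-dhchap-key --key-length=32 --hmac=0 --nqn=nqn.2024-02.io.spdk:host0
# --hmac=0 emits DHHC-1:00:..., --hmac=1/2/3 emit the SHA-256/384/512 variants.
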
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.593 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.852 nvme0n1 00:27:30.852 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.852 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.852 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.852 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.852 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.852 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.852 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.852 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.852 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.852 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.852 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.852 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.852 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:30.852 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.852 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.852 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.852 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: ]] 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.853 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.853 
14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.112 nvme0n1 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.112 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.371 nvme0n1 00:27:31.371 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.371 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.371 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.371 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.371 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.371 14:29:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.371 14:29:32 
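
A detail worth calling out from host/auth.sh@58, which recurs above: ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}). The ${var:+...} expansion makes ckey a two-word array when a controller key exists for that key index and an empty array when it does not, which is why the keyid 4 rounds (where ckey= is empty) attach with --dhchap-key key4 alone and authenticate unidirectionally. A standalone illustration of the idiom:

# The ${var:+...} trick behind host/auth.sh@58.
ckeys=('some-secret' '')             # index 0 has a ctrlr key, index 1 does not
for keyid in "${!ckeys[@]}"; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid adds ${#ckey[@]} words: ${ckey[*]}"
done
# keyid=0 adds 2 words: --dhchap-ctrlr-key ckey0
# keyid=1 adds 0 words:
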
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: ]] 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.371 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.372 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.372 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.372 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.372 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.372 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.631 nvme0n1 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: ]] 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.631 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.890 nvme0n1 00:27:31.890 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.890 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.890 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.890 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.890 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.890 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.149 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.149 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.149 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:32.149 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.149 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.149 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.149 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:32.149 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.149 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.149 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.149 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.149 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:32.149 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:32.149 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.149 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: ]] 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.150 
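
The get_main_ns_ip frames (nvmf/common.sh@769-783, split across the surrounding lines) run before every attach: they map the transport in use to the environment variable holding the initiator-facing address and dereference it, which is how every attach in this log ends up at 10.0.0.1. A rough reconstruction from the traced checks (the exact ordering of the guards is inferred):

# Rough reconstruction of get_main_ns_ip as traced at nvmf/common.sh@769-783.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1           # trace: [[ -z tcp ]]
    ip=${ip_candidates[$TEST_TRANSPORT]}           # trace: ip=NVMF_INITIATOR_IP
    [[ -z $ip || -z ${!ip} ]] && return 1          # trace: [[ -z 10.0.0.1 ]]
    echo "${!ip}"                                  # trace: echo 10.0.0.1
}
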
14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.150 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.409 nvme0n1 00:27:32.409 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.409 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.409 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.409 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.409 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.409 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.409 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.409 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.409 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.409 14:29:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: ]] 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.409 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.668 nvme0n1 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.668 14:29:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.668 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.669 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.669 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.669 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.669 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.669 14:29:33 
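
Each successful attach in this log is verified the same way before teardown: read the controller list back over RPC, compare the name against nvme0, and detach. The pattern \n\v\m\e\0 at host/auth.sh@64 is simply each character backslash-escaped so that [[ ... == ... ]] performs a literal comparison instead of glob matching. Condensed:

# Post-attach check and teardown, as repeated throughout this trace.
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == \n\v\m\e\0 ]]               # literal match against "nvme0", no globbing
rpc_cmd bdev_nvme_detach_controller nvme0
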
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.669 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.669 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.669 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.669 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.669 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.669 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.669 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.669 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.669 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.928 nvme0n1 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 
00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: ]] 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.928 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.190 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.190 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.190 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.190 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.190 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.190 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.190 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.190 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.190 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.190 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.190 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.190 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:33.190 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.190 14:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.450 nvme0n1 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: ]] 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.451 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.019 nvme0n1 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.019 14:29:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: ]] 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 
-n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.019 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.278 nvme0n1 00:27:34.278 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.278 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.278 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.278 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.278 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.278 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.278 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.278 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.278 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.278 14:29:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.278 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.278 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.278 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:34.278 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.278 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.278 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.278 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:34.278 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:34.278 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:34.278 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.278 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.278 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:34.278 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: ]] 00:27:34.278 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:34.278 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:34.545 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.545 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.545 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.545 14:29:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:34.545 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.545 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:34.545 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.545 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.545 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.545 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.545 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.545 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.545 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.545 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.545 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.545 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.545 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.545 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.545 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.546 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.546 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:34.546 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.546 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.806 nvme0n1 00:27:34.806 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.806 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.806 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.806 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.806 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.806 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.806 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.806 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.806 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.806 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.806 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.806 14:29:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.806 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:34.806 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.806 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.806 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.806 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:34.806 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.807 14:29:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.807 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.374 nvme0n1 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: ]] 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:35.374 14:29:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.374 14:29:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.942 nvme0n1 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: ]] 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.942 14:29:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.511 nvme0n1 00:27:36.511 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.511 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.511 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.511 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.511 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.511 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.511 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.511 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.511 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.511 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 
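For reference, every connect_authenticate pass in this trace has the same shape: auth.sh restricts the host to a single digest/dhgroup pair, attaches a controller using the DH-HMAC-CHAP key under test, confirms a controller named nvme0 appears, and detaches it before moving on to the next keyid. Below is a minimal stand-alone sketch of one such pass, using only the RPCs visible in the trace. It is an illustration, not the test's actual code: rpc.py (SPDK's standard RPC client, assumed on PATH) stands in for the test's rpc_cmd wrapper, and the key names assume the DHHC-1 secrets were registered with the SPDK keyring under "key<id>"/"ckey<id>" earlier in the run.

#!/usr/bin/env bash
# Sketch of a single connect_authenticate iteration, with the
# digest/dhgroup/keyid values exercised in the surrounding trace.
# Assumptions: a running SPDK app reachable via rpc.py, and keyring
# entries named key0/ckey0 already holding the DHHC-1 secrets.
digest=sha384
dhgroup=ffdhe8192
keyid=0

# Limit negotiation to one digest/dhgroup pair (host/auth.sh@60).
rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with DH-HMAC-CHAP; the controller (bidirectional) key is only
# passed when one exists for this keyid (host/auth.sh@58/@61).
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Verify the authenticated connection, then tear down (host/auth.sh@64/@65).
[[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc.py bdev_nvme_detach_controller nvme0

The trace above and below iterates exactly this skeleton over each keyid (0-4) for every digest/dhgroup combination in the test matrix (here sha384 with ffdhe6144 and ffdhe8192, followed by sha512 with ffdhe2048 and up), resetting the target-side key via nvmet_auth_set_key before each attach.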
00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: ]] 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.512 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.080 nvme0n1 00:27:37.080 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.080 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.080 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.080 14:29:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.080 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.080 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: ]] 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.339 14:29:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.907 nvme0n1 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.907 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.908 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.908 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.908 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.908 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.908 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.908 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.908 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.908 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.908 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.908 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.908 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.908 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:37.908 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.908 14:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.477 nvme0n1 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.477 14:29:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: ]] 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.477 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.736 nvme0n1 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe2048 1 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: ]] 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.736 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.737 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.737 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.737 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.737 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.737 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.737 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.737 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:27:38.737 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.737 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.737 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.737 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.737 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.996 nvme0n1 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: ]] 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 
2 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.996 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.997 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.997 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.997 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.997 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.256 nvme0n1 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.256 14:29:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: ]] 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.256 
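
One detail worth noting in the connect_authenticate frames above: ckey is built with bash's ${var:+word} expansion, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), so the --dhchap-ctrlr-key option pair is emitted only when a controller secret exists for that key id. For key id 4, whose ckey is empty, the array expands to nothing and the attach later in the trace runs with --dhchap-key alone, i.e. unidirectional authentication. A standalone illustration of the idiom (array contents here are illustrative only):

  ckeys=([1]="secret1" [4]="")            # key 4 has no controller secret
  keyid=1; opts=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${opts[@]}"                       # prints: --dhchap-ctrlr-key ckey1
  keyid=4; opts=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#opts[@]}"                      # prints: 0 (empty value, :+ yields nothing)
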
14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.256 14:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.515 nvme0n1 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.515 nvme0n1 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.515 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: ]] 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.775 14:29:40 
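
The host/auth.sh@101 and @102 frames here mark the transition from the ffdhe2048 group to ffdhe3072: with the digest pinned to sha512 in this phase, the test is a two-level sweep over every configured DH group and every key id. Reconstructed from the @101-@104 line numbers in the trace, the driving loop looks roughly like this; only the control flow and the two helper names are taken from the trace, and the array contents are abbreviated.

  for dhgroup in "${dhgroups[@]}"; do        # ... ffdhe2048 ffdhe3072 ffdhe4096 ...
      for keyid in "${!keys[@]}"; do         # 0 1 2 3 4
          nvmet_auth_set_key "sha512" "$dhgroup" "$keyid"    # program the target side
          connect_authenticate "sha512" "$dhgroup" "$keyid"  # attach, verify, detach
      done
  done
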
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.775 nvme0n1 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.775 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: ]] 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.034 14:29:40 
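
The get_main_ns_ip frames repeated throughout this trace resolve which address the attach should target: an associative array maps the transport to the name of the variable holding the address (tcp maps to NVMF_INITIATOR_IP), and the helper then dereferences that name, which is why the trace shows ip=NVMF_INITIATOR_IP immediately followed by tests against the literal 10.0.0.1. A condensed sketch of the pattern; the ${!ip} indirection is an inference from the trace, which only shows its before/after effect:

  NVMF_INITIATOR_IP=10.0.0.1
  declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
  ip=${ip_candidates[tcp]}     # ip now holds a variable NAME, not an address
  echo "${!ip}"                # indirect expansion: prints 10.0.0.1
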
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.034 nvme0n1 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.034 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.293 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.293 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.293 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.293 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.293 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: ]] 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.294 14:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.294 nvme0n1 00:27:40.294 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.294 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.294 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.294 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.294 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.294 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe3072 3 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: ]] 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.555 nvme0n1 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.555 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.814 14:29:41 
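
The recurring [[ nvme0 == \n\v\m\e\0 ]] frames are the success check, not line noise: inside [[ ]] the right-hand side of == is a glob pattern, so the script escapes every character to force a literal comparison against the name reported by bdev_nvme_get_controllers, and xtrace prints those escapes back. The same check in isolation (quoting the right-hand side disables globbing with the same effect; rpc.py usage as sketched earlier):

  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]       # literal match, equivalent to the trace's \n\v\m\e\0
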
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.814 nvme0n1 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.814 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: ]] 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.074 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.333 nvme0n1 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: ]] 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.333 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.334 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.334 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.334 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.334 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.334 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.334 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.334 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:41.334 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.334 14:29:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.593 nvme0n1 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.593 14:29:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: ]] 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.593 14:29:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.593 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.852 nvme0n1 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: ]] 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.852 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.111 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.111 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.111 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.111 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.111 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.111 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.111 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.111 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.111 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.111 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.111 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.111 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:42.111 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.111 
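Between setting the target key and attaching, connect_authenticate calls get_main_ns_ip (nvmf/common.sh@769-783) to pick the address the initiator dials. The trace shows the mechanism: an associative array maps each transport to the name of an environment variable, and that variable's value is echoed through indirect expansion, 10.0.0.1 for tcp in this run. A sketch of the selection, assuming the transport arrives in a variable such as TEST_TRANSPORT (the trace only shows the already-expanded values):

    # Candidate table and indirection as traced; TEST_TRANSPORT is an
    # assumed name for whatever variable expanded to "tcp" in the log.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # ${!ip} is 10.0.0.1 here
        echo "${!ip}"
    }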
14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.111 nvme0n1 00:27:42.111 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.370 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.371 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.371 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.371 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.371 14:29:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.630 nvme0n1 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.630 14:29:43 
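On the host side each pass boils down to the two RPCs visible at host/auth.sh@60-61: bdev_nvme_set_options restricts which DH-HMAC-CHAP digests and dhgroups the initiator may negotiate, and bdev_nvme_attach_controller connects with the key material for the current keyid. Restated as direct rpc.py invocations (the flags are verbatim from the trace; driving them through scripts/rpc.py rather than the test's rpc_cmd wrapper is an assumption):

    # Same calls the trace issues for sha512/ffdhe4096, keyid 4; 'key4'
    # names a key loaded into SPDK's keyring earlier in the test.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key4   # keyid 4 has no ckey, so this auth is unidirectional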
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: ]] 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.630 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.889 nvme0n1 00:27:42.889 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.889 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.889 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.889 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.889 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: ]] 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
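Success is then judged by name: the new controller must show up as nvme0 in bdev_nvme_get_controllers, after which it is detached so the next keyid starts clean (host/auth.sh@64-65). The \n\v\m\e\0 in the trace is just xtrace escaping the literal match pattern on the right side of ==. Condensed:

    # Pass criterion and teardown as traced; rpc.py invocation assumed as above.
    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]                               # authentication succeeded
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0   # reset for the next pass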
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.148 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.149 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.149 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.149 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.149 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.149 14:29:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.408 nvme0n1 00:27:43.408 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.408 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.408 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.408 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.408 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.408 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.408 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.408 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.408 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:43.408 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.408 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.408 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.408 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:43.408 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.408 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.408 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:43.408 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:43.408 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:43.408 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: ]] 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.667 
14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.667 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.926 nvme0n1 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: ]] 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.926 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.927 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:43.927 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.927 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.493 nvme0n1 00:27:44.493 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.493 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.493 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.493 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.493 14:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.493 14:29:45 
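By this point the log has cycled keyids 0 through 4 under ffdhe4096 and is most of the way through ffdhe6144; the @101-103 markers show the two loops driving it. The overall shape, with the digest fixed at sha512 for this stretch of the log:

    # Iteration skeleton per host/auth.sh@101-104; the dhgroup list below is
    # only the portion of the test's set that appears in this excerpt.
    for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "sha512" "$dhgroup" "$keyid"    # target side
            connect_authenticate "sha512" "$dhgroup" "$keyid"  # host side
        done
    done

Inside connect_authenticate, the expansion ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) at @58 produces the extra attach flag only when a controller key exists, which is why keyid 4 attaches with --dhchap-key alone.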
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.493 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.493 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.493 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.493 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.494 14:29:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.494 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.752 nvme0n1 00:27:44.752 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.752 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.752 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.752 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.752 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.752 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
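All the secrets in this log use the NVMe-oF DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:, where, per the NVMe authentication spec, <t> encodes the transformation applied to the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the secret plus a CRC-32. One way to mint a compatible secret is nvme-cli's generator; the option spelling here is from memory and should be checked against the installed nvme-cli version:

    # gen-dhchap-key emits this format; --hmac selects the transformation
    # byte (3 -> SHA-512, yielding a DHHC-1:03:... key like the ones above).
    nvme gen-dhchap-key --hmac=3 --key-length=64 \
        --nqn nqn.2024-02.io.spdk:host0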
00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE0Y2FiMDU1Y2NiOTk4NTc3MTA2NzczMjdiMjFlY2QzxGbB: 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: ]] 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTExODk1OWFhNjJiYzEzYjI1ODBlY2RhNjQyNWMwYTEyMjIyNGMzMjlhZmI3ZGI1Mzk2YzljOWZhZGI1ZmZmMmDd/lo=: 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.011 14:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.579 nvme0n1 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: ]] 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
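The key0/ckey0 and key1/ckey1 handles passed to --dhchap-key and --dhchap-ctrlr-key are names, not secrets: SPDK resolves them through its keyring, which this test populates before the section excerpted here. A sketch of that provisioning with the file-based keyring module, assuming the positional name/path signature and with illustrative file paths:

    # Illustrative keyring setup; each file holds one DHHC-1:... string.
    ./scripts/rpc.py keyring_file_add_key key1 /tmp/key1.dhchap
    ./scripts/rpc.py keyring_file_add_key ckey1 /tmp/ckey1.dhchap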
common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.579 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.147 nvme0n1 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:46.147 14:29:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: ]] 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 
-n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.147 14:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.715 nvme0n1 00:27:46.715 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.715 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.715 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.715 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.715 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.715 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY2MzJmNjA0NzkyMzRjOTFmNmE3NzIwOTBjNDdjMzFjOWUyN2E0MTkxNWI0OTkyKwNLzg==: 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: ]] 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWNlODk1ZWZkMDkxOTQwODFiMGJjNjVlMWI0YWZhYmKZO2iC: 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:46.974 14:29:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.974 14:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.542 nvme0n1 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.542 14:29:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFlMjYyNjEyMWQzNzllNzdlYjA2NzUzNjMzNGYxNzRlNmMyOGQyOGE4NjQzZDI0NDM0MTE4NDVlMTIwNDJlZqfAzEI=: 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.542 14:29:48 
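Keyid 4 is the only entry without a controller secret: ckey= expands empty above, the [[ -z '' ]] guard skips echoing a controller key into the target, and the ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion collapses to an empty array, so the attach that follows passes only --dhchap-key key4 (unidirectional authentication). The expansion idiom in isolation:

  ckeys[4]=''                                      # no controller key for keyid 4
  ckey=(${ckeys[4]:+--dhchap-ctrlr-key "ckey4"})   # :+ yields nothing for empty values
  echo "${#ckey[@]}"                               # prints 0: no extra args appended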
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.542 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.110 nvme0n1 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: ]] 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.110 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.111 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.111 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.111 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.111 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.111 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.111 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.111 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:48.111 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:48.111 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:48.111 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:48.111 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:48.111 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:48.111 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:48.111 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:48.111 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.111 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.371 request: 00:27:48.371 { 00:27:48.371 "name": "nvme0", 00:27:48.371 "trtype": "tcp", 00:27:48.371 "traddr": "10.0.0.1", 00:27:48.371 "adrfam": "ipv4", 00:27:48.371 "trsvcid": "4420", 00:27:48.371 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:48.371 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:48.371 "prchk_reftag": false, 00:27:48.371 "prchk_guard": false, 00:27:48.371 "hdgst": false, 00:27:48.371 "ddgst": false, 00:27:48.371 "allow_unrecognized_csi": false, 00:27:48.371 "method": "bdev_nvme_attach_controller", 00:27:48.371 "req_id": 1 00:27:48.371 } 00:27:48.371 Got JSON-RPC error response 00:27:48.371 response: 00:27:48.371 { 00:27:48.371 "code": -5, 00:27:48.371 "message": "Input/output 
error" 00:27:48.371 } 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.371 request: 00:27:48.371 { 00:27:48.371 "name": "nvme0", 00:27:48.371 "trtype": "tcp", 00:27:48.371 "traddr": "10.0.0.1", 00:27:48.371 "adrfam": "ipv4", 00:27:48.371 "trsvcid": "4420", 00:27:48.371 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:48.371 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:48.371 "prchk_reftag": false, 00:27:48.371 "prchk_guard": false, 00:27:48.371 "hdgst": false, 00:27:48.371 "ddgst": false, 00:27:48.371 "dhchap_key": "key2", 00:27:48.371 "allow_unrecognized_csi": false, 00:27:48.371 "method": "bdev_nvme_attach_controller", 00:27:48.371 "req_id": 1 00:27:48.371 } 00:27:48.371 Got JSON-RPC error response 00:27:48.371 response: 00:27:48.371 { 00:27:48.371 "code": -5, 00:27:48.371 "message": "Input/output error" 00:27:48.371 } 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:48.371 14:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.371 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.371 request: 00:27:48.371 { 00:27:48.371 "name": "nvme0", 00:27:48.371 "trtype": "tcp", 00:27:48.371 "traddr": "10.0.0.1", 00:27:48.371 "adrfam": "ipv4", 00:27:48.371 "trsvcid": "4420", 00:27:48.371 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:48.371 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:48.371 "prchk_reftag": false, 00:27:48.371 "prchk_guard": false, 00:27:48.371 "hdgst": false, 00:27:48.371 "ddgst": false, 00:27:48.371 "dhchap_key": "key1", 00:27:48.371 "dhchap_ctrlr_key": "ckey2", 00:27:48.371 "allow_unrecognized_csi": false, 00:27:48.371 "method": "bdev_nvme_attach_controller", 00:27:48.371 "req_id": 1 00:27:48.371 } 00:27:48.371 Got JSON-RPC error response 00:27:48.371 response: 00:27:48.371 { 00:27:48.371 "code": -5, 00:27:48.371 "message": "Input/output error" 00:27:48.371 } 00:27:48.630 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:48.630 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:48.630 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:48.630 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:48.630 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:48.630 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:48.630 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.630 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.630 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.631 14:29:49 
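All three failing attaches above run under NOT, the autotest_common.sh wrapper that inverts a command's exit status so an expected failure counts as a pass: attaching with no key, with key2 against a target provisioned for key1, and with key1 plus the mismatched controller key ckey2 each abort DH-HMAC-CHAP negotiation and surface as JSON-RPC error -5 (Input/output error). A simplified sketch of the inversion, without the es accounting and xtrace handling the real helper carries:

  NOT() {
      if "$@"; then
          return 1      # command unexpectedly succeeded: the test should fail
      fi
      return 0          # command failed as expected: the test passes
  }
  NOT rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
      -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2   # succeeds only because the attach is rejected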
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.631 nvme0n1 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: ]] 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.631 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.890 request: 00:27:48.890 { 00:27:48.890 "name": "nvme0", 00:27:48.890 "dhchap_key": "key1", 00:27:48.890 "dhchap_ctrlr_key": "ckey2", 00:27:48.890 "method": "bdev_nvme_set_keys", 00:27:48.890 "req_id": 1 00:27:48.890 } 00:27:48.890 Got JSON-RPC error response 00:27:48.890 response: 00:27:48.890 { 00:27:48.890 "code": -13, 00:27:48.890 "message": "Permission denied" 00:27:48.890 } 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:48.890 14:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:49.826 14:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.826 14:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:49.826 14:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.826 14:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.826 14:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.826 14:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # (( 1 != 0 )) 00:27:49.826 14:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3ZmE3ZWU3NDM4OGIyYjI5OTNhYmQ4ZTI3NzM0YTQzMjI3ZWIzMjQ1YWViMmI2oL/1sQ==: 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: ]] 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZmNDkwNjg0NTYzZTEzYWM0ZmM3M2ZjZjVhMGNkYjEwYzBhMzYzZWIzYjM3ZjU4+uMp3A==: 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.204 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.205 nvme0n1 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYxZWViZTM5NmFjNWUyYjBhZmExMmYxYTIyMTdmYTEAuy18: 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: ]] 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTEwMDYyYTE2Nzc4ZTBiZDU5ZDI2ZGEwMjFmZjdiMzSnuCR+: 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.205 request: 00:27:51.205 { 00:27:51.205 "name": "nvme0", 00:27:51.205 "dhchap_key": "key2", 00:27:51.205 "dhchap_ctrlr_key": "ckey1", 00:27:51.205 "method": "bdev_nvme_set_keys", 00:27:51.205 "req_id": 1 00:27:51.205 } 00:27:51.205 Got JSON-RPC 
error response 00:27:51.205 response: 00:27:51.205 { 00:27:51.205 "code": -13, 00:27:51.205 "message": "Permission denied" 00:27:51.205 } 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:51.205 14:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:52.142 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.142 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:52.142 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.142 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.401 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.401 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:52.401 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:52.401 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:52.401 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:52.401 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:52.401 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:52.401 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:52.401 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:52.401 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:52.401 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:52.401 rmmod nvme_tcp 00:27:52.401 rmmod nvme_fabrics 00:27:52.401 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:52.401 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:52.401 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:52.401 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1784609 ']' 00:27:52.401 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1784609 00:27:52.401 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1784609 ']' 00:27:52.401 14:29:52 
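The rekey phase above exercises bdev_nvme_set_keys in both directions: rotating to the pair the target was just provisioned for (key2/ckey2) succeeds, while the mismatched pairs key1/ckey2 and key2/ckey1 come back as JSON-RPC error -13 (Permission denied), since the target refuses reauthentication with secrets it was not provisioned for. Because the controller was attached with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1, a failed reauthentication drops it within about a second, and the script polls once per second until it is gone, roughly:

  # The host/auth.sh@137 and @148 wait loops, condensed:
  while (( $(rpc.py bdev_nvme_get_controllers | jq length) != 0 )); do
      sleep 1s
  done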
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1784609 00:27:52.401 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:27:52.401 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:52.401 14:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1784609 00:27:52.401 14:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:52.401 14:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:52.401 14:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1784609' 00:27:52.401 killing process with pid 1784609 00:27:52.401 14:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1784609 00:27:52.401 14:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1784609 00:27:52.660 14:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:52.660 14:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:52.660 14:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:52.660 14:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:52.660 14:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:52.660 14:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:52.660 14:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:52.660 14:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:52.660 14:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:52.660 14:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.660 14:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.660 14:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.569 14:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:54.569 14:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:54.569 14:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:54.569 14:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:54.569 14:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:54.569 14:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:54.569 14:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:54.569 14:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:54.569 14:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir 
/sys/kernel/config/nvmet/ports/1 00:27:54.569 14:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:54.569 14:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:54.569 14:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:54.569 14:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:57.862 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:27:57.862 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:57.862 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:57.862 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:58.121 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:58.121 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:58.121 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:58.121 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:58.121 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:58.121 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:58.121 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:58.121 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:58.121 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:58.121 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:58.121 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:58.121 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:58.121 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:59.058 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:59.058 14:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.M7A /tmp/spdk.key-null.mqO /tmp/spdk.key-sha256.Br5 /tmp/spdk.key-sha384.P3a /tmp/spdk.key-sha512.db9 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:59.058 14:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:02.515 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:28:02.515 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:02.515 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:02.515 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:02.515 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:02.515 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:28:02.515 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:02.515 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:02.515 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:02.515 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:02.515 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:02.515 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:02.515 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:02.515 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:02.515 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:02.515 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:02.515 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:02.515 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:02.515 00:28:02.515 real 0m56.502s 00:28:02.515 user 0m50.442s 00:28:02.515 sys 0m14.172s 00:28:02.515 14:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:28:02.515 14:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.515 ************************************ 00:28:02.515 END TEST nvmf_auth_host 00:28:02.515 ************************************ 00:28:02.515 14:30:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:02.515 14:30:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:02.515 14:30:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:02.515 14:30:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:02.515 14:30:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.515 ************************************ 00:28:02.515 START TEST nvmf_digest 00:28:02.515 ************************************ 00:28:02.515 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:02.775 * Looking for test storage... 00:28:02.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:02.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.775 --rc genhtml_branch_coverage=1 00:28:02.775 --rc genhtml_function_coverage=1 00:28:02.775 --rc genhtml_legend=1 00:28:02.775 --rc geninfo_all_blocks=1 00:28:02.775 --rc geninfo_unexecuted_blocks=1 00:28:02.775 00:28:02.775 ' 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:02.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.775 --rc genhtml_branch_coverage=1 00:28:02.775 --rc genhtml_function_coverage=1 00:28:02.775 --rc genhtml_legend=1 00:28:02.775 --rc geninfo_all_blocks=1 00:28:02.775 --rc geninfo_unexecuted_blocks=1 00:28:02.775 00:28:02.775 ' 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:02.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.775 --rc genhtml_branch_coverage=1 00:28:02.775 --rc genhtml_function_coverage=1 00:28:02.775 --rc genhtml_legend=1 00:28:02.775 --rc geninfo_all_blocks=1 00:28:02.775 --rc geninfo_unexecuted_blocks=1 00:28:02.775 00:28:02.775 ' 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:02.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.775 --rc genhtml_branch_coverage=1 00:28:02.775 --rc genhtml_function_coverage=1 00:28:02.775 --rc genhtml_legend=1 00:28:02.775 --rc geninfo_all_blocks=1 00:28:02.775 --rc geninfo_unexecuted_blocks=1 00:28:02.775 00:28:02.775 ' 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:02.775 
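Before the digest test proper starts, the lt 1.15 2 check above decides whether the installed lcov still needs the explicit branch/function coverage flags: cmp_versions splits both version strings on ., -, and : and compares them component by component. A condensed sketch of the comparison traced from scripts/common.sh, assuming purely numeric components (the real helper normalizes each field through decimal() first):

  cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
      local -a ver1 ver2
      local IFS=.-: op=$2 v
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == *=* ]]    # all components equal: only <=, >=, == relations hold
  }
  cmp_versions 1.15 '<' 2 && echo 'lcov older than 2: pass legacy coverage flags'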
14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:02.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:02.775 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:02.776 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:02.776 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:02.776 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:02.776 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:02.776 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:02.776 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:02.776 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:02.776 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:02.776 14:30:03 
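One genuine shell diagnostic is captured above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' while building NVMF_APP, and test's -eq operator requires integers, hence "[: : integer expression expected". The check simply evaluates false and setup continues, so the message is noise rather than a failure; defaulting the unset variable to an integer would silence it. A hypothetical guard (FLAG stands in for whatever variable line 33 tests, which the trace does not show):

  FLAG=''                           # unset/empty flag, as in the trace
  if [ "${FLAG:-0}" -eq 1 ]; then   # :-0 guarantees test sees an integer
      echo 'flag-specific app args would be appended here'
  fi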
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.776 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:02.776 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.776 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:02.776 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:02.776 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:02.776 14:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:09.342 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:09.342 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:09.342 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:09.342 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.343 
14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:09.343 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:09.343 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:09.343 Found net devices under 0000:af:00.0: cvl_0_0 
00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:09.343 Found net devices under 0000:af:00.1: cvl_0_1 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:09.343 14:30:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:09.343 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:09.343 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:09.343 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:09.343 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:09.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:09.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:28:09.602 00:28:09.602 --- 10.0.0.2 ping statistics --- 00:28:09.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.602 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:28:09.602 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:09.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:09.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:28:09.602 00:28:09.602 --- 10.0.0.1 ping statistics --- 00:28:09.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.602 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:28:09.602 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:09.602 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:09.602 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:09.602 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:09.602 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:09.602 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:09.602 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:09.602 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:09.602 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:09.603 ************************************ 00:28:09.603 START TEST nvmf_digest_clean 00:28:09.603 ************************************ 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1800053 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1800053 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1800053 ']' 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:09.603 14:30:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:09.603 [2024-12-10 14:30:10.209222] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:28:09.603 [2024-12-10 14:30:10.209261] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.603 [2024-12-10 14:30:10.289076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.603 [2024-12-10 14:30:10.339760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:09.603 [2024-12-10 14:30:10.339803] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:09.603 [2024-12-10 14:30:10.339813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:09.603 [2024-12-10 14:30:10.339822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:09.603 [2024-12-10 14:30:10.339828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:09.603 [2024-12-10 14:30:10.340539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:10.551 null0 00:28:10.551 [2024-12-10 14:30:11.164514] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.551 [2024-12-10 14:30:11.188713] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1800139 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1800139 /var/tmp/bperf.sock 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1800139 ']' 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:10.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:10.551 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:10.551 [2024-12-10 14:30:11.243949] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:28:10.551 [2024-12-10 14:30:11.243990] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1800139 ] 00:28:10.810 [2024-12-10 14:30:11.323280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.810 [2024-12-10 14:30:11.363929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.810 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:10.810 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:10.810 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:10.810 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:10.810 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:11.070 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:11.070 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:11.329 nvme0n1 00:28:11.329 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:11.329 14:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:11.329 Running I/O for 2 seconds... 
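The four RPCs traced just above are the whole clean-digest harness for one pass: bdevperf starts paused (--wait-for-rpc), framework_start_init brings the accel layer up, bdev_nvme_attach_controller connects to the target with data digest enabled (--ddgst), and perform_tests drives the timed workload. Condensed into a standalone sketch using the same binaries, socket, and flags as the trace (the real script additionally uses waitforlisten to block until the socket is up):

    # start bdevperf paused so init options can still be changed over RPC
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # finish framework init, then attach the target with data digest on
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # kick off the timed run
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests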
00:28:13.643 25320.00 IOPS, 98.91 MiB/s [2024-12-10T13:30:14.383Z] 25251.00 IOPS, 98.64 MiB/s 00:28:13.643 Latency(us) 00:28:13.643 [2024-12-10T13:30:14.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.643 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:13.643 nvme0n1 : 2.00 25270.94 98.71 0.00 0.00 5058.45 2293.76 13668.94 00:28:13.643 [2024-12-10T13:30:14.383Z] =================================================================================================================== 00:28:13.643 [2024-12-10T13:30:14.383Z] Total : 25270.94 98.71 0.00 0.00 5058.45 2293.76 13668.94 00:28:13.643 { 00:28:13.643 "results": [ 00:28:13.643 { 00:28:13.643 "job": "nvme0n1", 00:28:13.643 "core_mask": "0x2", 00:28:13.643 "workload": "randread", 00:28:13.643 "status": "finished", 00:28:13.643 "queue_depth": 128, 00:28:13.643 "io_size": 4096, 00:28:13.643 "runtime": 2.004991, 00:28:13.643 "iops": 25270.936378268034, 00:28:13.643 "mibps": 98.71459522760951, 00:28:13.643 "io_failed": 0, 00:28:13.643 "io_timeout": 0, 00:28:13.643 "avg_latency_us": 5058.452136372352, 00:28:13.643 "min_latency_us": 2293.76, 00:28:13.643 "max_latency_us": 13668.937142857143 00:28:13.643 } 00:28:13.643 ], 00:28:13.643 "core_count": 1 00:28:13.643 } 00:28:13.643 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:13.643 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:13.643 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:13.643 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:13.643 | select(.opcode=="crc32c") 00:28:13.643 | "\(.module_name) \(.executed)"' 00:28:13.643 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:13.643 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:13.643 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:13.643 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:13.643 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:13.643 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1800139 00:28:13.643 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1800139 ']' 00:28:13.643 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1800139 00:28:13.643 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:13.643 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:13.643 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1800139 00:28:13.643 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:13.643 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 
= sudo ']' 00:28:13.643 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1800139' 00:28:13.643 killing process with pid 1800139 00:28:13.643 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1800139 00:28:13.643 Received shutdown signal, test time was about 2.000000 seconds 00:28:13.643 00:28:13.643 Latency(us) 00:28:13.643 [2024-12-10T13:30:14.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.643 [2024-12-10T13:30:14.383Z] =================================================================================================================== 00:28:13.643 [2024-12-10T13:30:14.383Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:13.643 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1800139 00:28:13.902 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:13.902 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:13.902 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:13.902 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:13.902 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:13.902 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:13.902 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:13.902 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1800762 00:28:13.902 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1800762 /var/tmp/bperf.sock 00:28:13.902 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:13.902 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1800762 ']' 00:28:13.902 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:13.902 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:13.902 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:13.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:13.902 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:13.902 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:13.902 [2024-12-10 14:30:14.514478] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:28:13.902 [2024-12-10 14:30:14.514525] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1800762 ] 00:28:13.902 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:13.902 Zero copy mechanism will not be used. 00:28:13.902 [2024-12-10 14:30:14.596302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.902 [2024-12-10 14:30:14.636635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.161 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:14.161 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:14.161 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:14.161 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:14.161 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:14.420 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:14.420 14:30:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:14.678 nvme0n1 00:28:14.678 14:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:14.678 14:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:14.935 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:14.935 Zero copy mechanism will not be used. 00:28:14.935 Running I/O for 2 seconds... 
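The "zero copy threshold" notices printed here are expected for the 128 KiB passes: 131072 exceeds the 65536-byte cutoff, so the TCP initiator falls back to copied sends. After each pass, digest.sh also verifies that crc32c work really executed and in the expected accel module (software, since DSA is disabled via scan_dsa=false). A sketch of that check, reusing the exact jq filter traced in this log; the process-substitution plumbing is an illustrative condensation, not the script verbatim:

    read -r acc_module acc_executed < <(
        ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    # pass criteria mirrored from digest.sh@94-96
    (( acc_executed > 0 )) && [[ $acc_module == software ]]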
00:28:16.807 6200.00 IOPS, 775.00 MiB/s [2024-12-10T13:30:17.547Z] 6200.00 IOPS, 775.00 MiB/s 00:28:16.807 Latency(us) 00:28:16.807 [2024-12-10T13:30:17.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.807 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:16.807 nvme0n1 : 2.05 6071.06 758.88 0.00 0.00 2583.58 639.76 45438.29 00:28:16.807 [2024-12-10T13:30:17.547Z] =================================================================================================================== 00:28:16.807 [2024-12-10T13:30:17.547Z] Total : 6071.06 758.88 0.00 0.00 2583.58 639.76 45438.29 00:28:16.807 { 00:28:16.807 "results": [ 00:28:16.807 { 00:28:16.807 "job": "nvme0n1", 00:28:16.807 "core_mask": "0x2", 00:28:16.807 "workload": "randread", 00:28:16.807 "status": "finished", 00:28:16.807 "queue_depth": 16, 00:28:16.807 "io_size": 131072, 00:28:16.807 "runtime": 2.045112, 00:28:16.807 "iops": 6071.061144817497, 00:28:16.807 "mibps": 758.8826431021871, 00:28:16.807 "io_failed": 0, 00:28:16.807 "io_timeout": 0, 00:28:16.807 "avg_latency_us": 2583.5780461462937, 00:28:16.807 "min_latency_us": 639.7561904761905, 00:28:16.807 "max_latency_us": 45438.293333333335 00:28:16.807 } 00:28:16.807 ], 00:28:16.807 "core_count": 1 00:28:16.807 } 00:28:16.807 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:16.807 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:16.807 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:16.807 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:16.807 | select(.opcode=="crc32c") 00:28:16.807 | "\(.module_name) \(.executed)"' 00:28:16.807 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:17.066 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:17.066 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:17.066 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:17.066 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:17.066 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1800762 00:28:17.066 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1800762 ']' 00:28:17.066 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1800762 00:28:17.066 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:17.066 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:17.066 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1800762 00:28:17.066 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:17.066 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:28:17.066 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1800762' 00:28:17.066 killing process with pid 1800762 00:28:17.066 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1800762 00:28:17.066 Received shutdown signal, test time was about 2.000000 seconds 00:28:17.066 00:28:17.066 Latency(us) 00:28:17.066 [2024-12-10T13:30:17.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.066 [2024-12-10T13:30:17.806Z] =================================================================================================================== 00:28:17.066 [2024-12-10T13:30:17.806Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:17.066 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1800762 00:28:17.325 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:17.325 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:17.325 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:17.325 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:17.325 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:17.325 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:17.325 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:17.325 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1801233 00:28:17.325 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1801233 /var/tmp/bperf.sock 00:28:17.325 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:17.325 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1801233 ']' 00:28:17.325 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:17.325 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:17.325 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:17.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:17.325 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:17.325 14:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:17.325 [2024-12-10 14:30:17.989145] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:28:17.325 [2024-12-10 14:30:17.989198] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1801233 ] 00:28:17.584 [2024-12-10 14:30:18.070123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.584 [2024-12-10 14:30:18.110209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.584 14:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:17.584 14:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:17.584 14:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:17.584 14:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:17.584 14:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:17.843 14:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.843 14:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:18.102 nvme0n1 00:28:18.102 14:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:18.102 14:30:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:18.102 Running I/O for 2 seconds... 
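The MiB/s column bdevperf reports is just IOPS scaled by the I/O size; as a quick sanity check against the 4 KiB randwrite totals printed below:

    # IOPS * io_size / 2^20 -> MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 28794.11 * 4096 / 1048576 }'   # 112.48, matching the table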
00:28:20.416 28784.00 IOPS, 112.44 MiB/s [2024-12-10T13:30:21.156Z] 28783.00 IOPS, 112.43 MiB/s 00:28:20.416 Latency(us) 00:28:20.416 [2024-12-10T13:30:21.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.416 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:20.416 nvme0n1 : 2.00 28794.11 112.48 0.00 0.00 4441.35 1771.03 7208.96 00:28:20.416 [2024-12-10T13:30:21.156Z] =================================================================================================================== 00:28:20.416 [2024-12-10T13:30:21.156Z] Total : 28794.11 112.48 0.00 0.00 4441.35 1771.03 7208.96 00:28:20.416 { 00:28:20.416 "results": [ 00:28:20.416 { 00:28:20.416 "job": "nvme0n1", 00:28:20.416 "core_mask": "0x2", 00:28:20.416 "workload": "randwrite", 00:28:20.416 "status": "finished", 00:28:20.416 "queue_depth": 128, 00:28:20.416 "io_size": 4096, 00:28:20.416 "runtime": 2.003674, 00:28:20.416 "iops": 28794.105228694887, 00:28:20.416 "mibps": 112.4769735495894, 00:28:20.416 "io_failed": 0, 00:28:20.416 "io_timeout": 0, 00:28:20.416 "avg_latency_us": 4441.349230406066, 00:28:20.416 "min_latency_us": 1771.032380952381, 00:28:20.416 "max_latency_us": 7208.96 00:28:20.416 } 00:28:20.416 ], 00:28:20.416 "core_count": 1 00:28:20.416 } 00:28:20.416 14:30:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:20.416 14:30:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:20.416 14:30:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:20.416 14:30:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:20.416 | select(.opcode=="crc32c") 00:28:20.416 | "\(.module_name) \(.executed)"' 00:28:20.416 14:30:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:20.416 14:30:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:20.416 14:30:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:20.416 14:30:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:20.416 14:30:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:20.416 14:30:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1801233 00:28:20.416 14:30:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1801233 ']' 00:28:20.416 14:30:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1801233 00:28:20.416 14:30:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:20.416 14:30:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.416 14:30:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1801233 00:28:20.416 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:20.416 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:28:20.416 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1801233' 00:28:20.416 killing process with pid 1801233 00:28:20.416 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1801233 00:28:20.416 Received shutdown signal, test time was about 2.000000 seconds 00:28:20.416 00:28:20.416 Latency(us) 00:28:20.416 [2024-12-10T13:30:21.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.416 [2024-12-10T13:30:21.156Z] =================================================================================================================== 00:28:20.416 [2024-12-10T13:30:21.156Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:20.416 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1801233 00:28:20.675 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:20.675 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:20.675 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:20.675 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:20.675 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:20.675 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:20.675 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:20.675 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1801783 00:28:20.675 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1801783 /var/tmp/bperf.sock 00:28:20.675 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:20.675 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1801783 ']' 00:28:20.675 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:20.675 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:20.675 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:20.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:20.675 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:20.675 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:20.675 [2024-12-10 14:30:21.240861] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:28:20.675 [2024-12-10 14:30:21.240911] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1801783 ] 00:28:20.675 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:20.675 Zero copy mechanism will not be used. 00:28:20.675 [2024-12-10 14:30:21.321759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.675 [2024-12-10 14:30:21.362802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.675 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:20.933 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:20.933 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:20.933 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:20.934 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:21.192 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.192 14:30:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.450 nvme0n1 00:28:21.450 14:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:21.450 14:30:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:21.450 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:21.450 Zero copy mechanism will not be used. 00:28:21.450 Running I/O for 2 seconds... 
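All four bperf passes ride on the namespace split established by nvmf_tcp_init earlier in this log: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace to host the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator, with an iptables rule opening TCP/4420. Condensed from the commands traced above (interface names as discovered on this runner):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT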
00:28:23.763 7225.00 IOPS, 903.12 MiB/s [2024-12-10T13:30:24.503Z] 6891.00 IOPS, 861.38 MiB/s 00:28:23.763 Latency(us) 00:28:23.763 [2024-12-10T13:30:24.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.763 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:23.763 nvme0n1 : 2.00 6889.49 861.19 0.00 0.00 2318.52 1739.82 4868.39 00:28:23.763 [2024-12-10T13:30:24.503Z] =================================================================================================================== 00:28:23.763 [2024-12-10T13:30:24.503Z] Total : 6889.49 861.19 0.00 0.00 2318.52 1739.82 4868.39 00:28:23.763 { 00:28:23.763 "results": [ 00:28:23.763 { 00:28:23.763 "job": "nvme0n1", 00:28:23.763 "core_mask": "0x2", 00:28:23.763 "workload": "randwrite", 00:28:23.763 "status": "finished", 00:28:23.763 "queue_depth": 16, 00:28:23.763 "io_size": 131072, 00:28:23.763 "runtime": 2.003341, 00:28:23.763 "iops": 6889.491105108916, 00:28:23.763 "mibps": 861.1863881386145, 00:28:23.763 "io_failed": 0, 00:28:23.763 "io_timeout": 0, 00:28:23.763 "avg_latency_us": 2318.521522760677, 00:28:23.763 "min_latency_us": 1739.824761904762, 00:28:23.763 "max_latency_us": 4868.388571428572 00:28:23.763 } 00:28:23.763 ], 00:28:23.763 "core_count": 1 00:28:23.763 } 00:28:23.763 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:23.763 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:23.763 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:23.763 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:23.763 | select(.opcode=="crc32c") 00:28:23.763 | "\(.module_name) \(.executed)"' 00:28:23.763 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:23.763 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:23.763 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:23.763 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:23.763 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:23.763 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1801783 00:28:23.763 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1801783 ']' 00:28:23.763 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1801783 00:28:23.763 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:23.764 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:23.764 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1801783 00:28:23.764 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:23.764 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:28:23.764 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1801783' 00:28:23.764 killing process with pid 1801783 00:28:23.764 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1801783 00:28:23.764 Received shutdown signal, test time was about 2.000000 seconds 00:28:23.764 00:28:23.764 Latency(us) 00:28:23.764 [2024-12-10T13:30:24.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.764 [2024-12-10T13:30:24.504Z] =================================================================================================================== 00:28:23.764 [2024-12-10T13:30:24.504Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:23.764 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1801783 00:28:24.023 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1800053 00:28:24.023 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1800053 ']' 00:28:24.023 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1800053 00:28:24.023 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:24.023 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:24.023 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1800053 00:28:24.023 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:24.023 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:24.023 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1800053' 00:28:24.023 killing process with pid 1800053 00:28:24.023 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1800053 00:28:24.023 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1800053 00:28:24.282 00:28:24.282 real 0m14.693s 00:28:24.282 user 0m27.608s 00:28:24.282 sys 0m4.634s 00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:24.282 ************************************ 00:28:24.282 END TEST nvmf_digest_clean 00:28:24.282 ************************************ 00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:24.282 ************************************ 00:28:24.282 START TEST nvmf_digest_error 00:28:24.282 ************************************ 00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
00:28:24.282
00:28:24.282 real    0m14.693s
00:28:24.282 user    0m27.608s
00:28:24.282 sys     0m4.634s
00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:24.282 ************************************
00:28:24.282 END TEST nvmf_digest_clean
00:28:24.282 ************************************
00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:28:24.282 ************************************
00:28:24.282 START TEST nvmf_digest_error
00:28:24.282 ************************************
00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error
00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1802409
00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1802409
00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1802409 ']'
00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:24.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:24.282 14:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:24.540 [2024-12-10 14:30:24.973980] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization...
00:28:24.540 [2024-12-10 14:30:24.974025] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:24.540 [2024-12-10 14:30:25.057453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:24.540 [2024-12-10 14:30:25.092438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:24.540 [2024-12-10 14:30:25.092471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:24.540 [2024-12-10 14:30:25.092478] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:24.540 [2024-12-10 14:30:25.092483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:24.540 [2024-12-10 14:30:25.092488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
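nvmfappstart has now forked the target with --wait-for-rpc, and waitforlisten simply polls the RPC socket until the app answers. A condensed sketch of that pattern, using the paths and flags from the log (the polling RPC, rpc_get_methods, and the retry loop are illustrative, not the verbatim helper):

    # Sketch: start an SPDK target paused at the RPC phase, then wait for its socket.
    nvmf_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    "$nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &   # subsystems stay down until framework_start_init
    nvmfpid=$!

    for ((i = 100; i > 0; i--)); do
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done
    (( i > 0 )) || { echo "nvmf_tgt (pid $nvmfpid) never started listening"; exit 1; }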
00:28:24.540 [2024-12-10 14:30:25.093023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:25.106 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:25.106 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:28:25.106 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:25.106 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:25.106 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:25.106 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:25.106 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:28:25.106 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.106 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:25.365 [2024-12-10 14:30:25.843179] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:28:25.365 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.365 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:28:25.365 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:28:25.365 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.365 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:25.365 null0
00:28:25.365 [2024-12-10 14:30:25.933709] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:25.365 [2024-12-10 14:30:25.957906] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:25.365 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
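The "null0" line and the two tcp.c notices are the visible output of common_target_config's batched rpc_cmd. A plausible expansion of that configuration follows; only the bdev name, NQN, transport, address, and port appear in the log, so the null-bdev size (100 MiB) and block size (4096) here are assumptions:

    # Plausible reconstruction of the target-side RPC sequence driven above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    "$rpc" framework_start_init                    # finish startup; crc32c is now pinned to the error module
    "$rpc" bdev_null_create null0 100 4096         # prints "null0" on success, as captured above
    "$rpc" nvmf_create_transport -t tcp            # "*** TCP Transport Init ***"
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Because the app was started with --wait-for-rpc, framework_start_init may only run after accel_assign_opc, which is what keeps crc32c assigned to the error module for the whole run.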
00:28:25.365 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:28:25.365 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:25.365 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:25.365 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:25.365 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:25.365 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1802652
00:28:25.365 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1802652 /var/tmp/bperf.sock
00:28:25.365 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:28:25.365 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1802652 ']'
00:28:25.365 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:25.365 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:25.365 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:25.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:25.365 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:25.365 14:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:25.624 [2024-12-10 14:30:26.011684] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization...
00:28:25.624 [2024-12-10 14:30:26.011726] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1802652 ]
00:28:25.624 [2024-12-10 14:30:26.090213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:25.624 [2024-12-10 14:30:26.131344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:25.624 14:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:25.624 14:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:28:25.624 14:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:25.624 14:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:25.882 14:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:25.882 14:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.882 14:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:25.882 14:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.882 14:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:25.882 14:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:26.141 nvme0n1
00:28:26.141 14:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:26.141 14:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.141 14:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
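From here to the end of the 2-second run, the error stream below is the test working as intended: the controller was attached with --ddgst, so the initiator verifies a CRC32C data digest on every TCP response PDU, while the corrupt injection set up above falsifies roughly every 256th digest produced by the target's accel error module. Each hit is logged as a "data digest error" followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which bdev_nvme keeps retrying because of --bdev-retry-count -1. Condensed, the sequence just traced:

    # Condensed replay of the steps traced above (same sockets and flags).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bperf_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py

    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256   # target side: corrupt a crc32c result every 256 ops
    "$bperf_py" -s /var/tmp/bperf.sock perform_tests              # start the 2-second randread run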
00:28:26.141 14:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.141 14:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:26.141 14:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:26.400 Running I/O for 2 seconds... 00:28:26.400 [2024-12-10 14:30:26.919738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.400 [2024-12-10 14:30:26.919771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.400 [2024-12-10 14:30:26.919781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.400 [2024-12-10 14:30:26.930768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.400 [2024-12-10 14:30:26.930792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.400 [2024-12-10 14:30:26.930801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.400 [2024-12-10 14:30:26.943455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.400 [2024-12-10 14:30:26.943479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.400 [2024-12-10 14:30:26.943488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.400 [2024-12-10 14:30:26.952454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.400 [2024-12-10 14:30:26.952476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.400 [2024-12-10 14:30:26.952485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.400 [2024-12-10 14:30:26.964243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.400 [2024-12-10 14:30:26.964264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.400 [2024-12-10 14:30:26.964272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.400 [2024-12-10 14:30:26.975518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.400 [2024-12-10 14:30:26.975540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.400 [2024-12-10 14:30:26.975548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.400 [2024-12-10 14:30:26.986224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.401 [2024-12-10 14:30:26.986244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.401 [2024-12-10 14:30:26.986252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.401 [2024-12-10 14:30:26.994783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.401 [2024-12-10 14:30:26.994808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.401 [2024-12-10 14:30:26.994816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.401 [2024-12-10 14:30:27.006770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.401 [2024-12-10 14:30:27.006792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.401 [2024-12-10 14:30:27.006799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.401 [2024-12-10 14:30:27.014808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.401 [2024-12-10 14:30:27.014828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.401 [2024-12-10 14:30:27.014836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.401 [2024-12-10 14:30:27.024995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.401 [2024-12-10 14:30:27.025015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.401 [2024-12-10 14:30:27.025023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.401 [2024-12-10 14:30:27.035897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.401 [2024-12-10 14:30:27.035916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.401 [2024-12-10 14:30:27.035925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.401 [2024-12-10 14:30:27.043501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.401 [2024-12-10 14:30:27.043521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.401 [2024-12-10 14:30:27.043528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.401 [2024-12-10 14:30:27.053742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.401 [2024-12-10 14:30:27.053762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.401 [2024-12-10 14:30:27.053770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.401 [2024-12-10 14:30:27.064616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.401 [2024-12-10 14:30:27.064636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.401 [2024-12-10 14:30:27.064644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.401 [2024-12-10 14:30:27.075836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.401 [2024-12-10 14:30:27.075856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.401 [2024-12-10 14:30:27.075864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.401 [2024-12-10 14:30:27.088002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.401 [2024-12-10 14:30:27.088021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.401 [2024-12-10 14:30:27.088029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.401 [2024-12-10 14:30:27.096570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.401 [2024-12-10 14:30:27.096591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.401 [2024-12-10 14:30:27.096599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.401 [2024-12-10 14:30:27.108395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.401 [2024-12-10 14:30:27.108416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.401 [2024-12-10 14:30:27.108424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.401 [2024-12-10 14:30:27.116786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.401 [2024-12-10 14:30:27.116807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.401 [2024-12-10 14:30:27.116815] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.401 [2024-12-10 14:30:27.127863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.401 [2024-12-10 14:30:27.127883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.401 [2024-12-10 14:30:27.127891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.401 [2024-12-10 14:30:27.137271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.401 [2024-12-10 14:30:27.137291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.401 [2024-12-10 14:30:27.137299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.660 [2024-12-10 14:30:27.148460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.660 [2024-12-10 14:30:27.148480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.660 [2024-12-10 14:30:27.148487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.660 [2024-12-10 14:30:27.159251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.660 [2024-12-10 14:30:27.159270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.660 [2024-12-10 14:30:27.159278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.660 [2024-12-10 14:30:27.168112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.660 [2024-12-10 14:30:27.168136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.660 [2024-12-10 14:30:27.168143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.660 [2024-12-10 14:30:27.180056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.660 [2024-12-10 14:30:27.180076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.660 [2024-12-10 14:30:27.180084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.660 [2024-12-10 14:30:27.191429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.660 [2024-12-10 14:30:27.191449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:26.660 [2024-12-10 14:30:27.191457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.660 [2024-12-10 14:30:27.203164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.660 [2024-12-10 14:30:27.203184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.660 [2024-12-10 14:30:27.203191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.660 [2024-12-10 14:30:27.215236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.660 [2024-12-10 14:30:27.215256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.660 [2024-12-10 14:30:27.215264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.661 [2024-12-10 14:30:27.223476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.661 [2024-12-10 14:30:27.223496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.661 [2024-12-10 14:30:27.223504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.661 [2024-12-10 14:30:27.235437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.661 [2024-12-10 14:30:27.235456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.661 [2024-12-10 14:30:27.235464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.661 [2024-12-10 14:30:27.248024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.661 [2024-12-10 14:30:27.248043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.661 [2024-12-10 14:30:27.248051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.661 [2024-12-10 14:30:27.256260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.661 [2024-12-10 14:30:27.256278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.661 [2024-12-10 14:30:27.256286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.661 [2024-12-10 14:30:27.270581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.661 [2024-12-10 14:30:27.270600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 
lba:19960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.661 [2024-12-10 14:30:27.270608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.661 [2024-12-10 14:30:27.279467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.661 [2024-12-10 14:30:27.279487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.661 [2024-12-10 14:30:27.279495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.661 [2024-12-10 14:30:27.292055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.661 [2024-12-10 14:30:27.292075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.661 [2024-12-10 14:30:27.292083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.661 [2024-12-10 14:30:27.303199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.661 [2024-12-10 14:30:27.303223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.661 [2024-12-10 14:30:27.303232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.661 [2024-12-10 14:30:27.316136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.661 [2024-12-10 14:30:27.316156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.661 [2024-12-10 14:30:27.316164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.661 [2024-12-10 14:30:27.325756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.661 [2024-12-10 14:30:27.325776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.661 [2024-12-10 14:30:27.325783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.661 [2024-12-10 14:30:27.334236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.661 [2024-12-10 14:30:27.334256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.661 [2024-12-10 14:30:27.334263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.661 [2024-12-10 14:30:27.345693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.661 [2024-12-10 14:30:27.345713] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.661 [2024-12-10 14:30:27.345721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.661 [2024-12-10 14:30:27.358310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.661 [2024-12-10 14:30:27.358329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.661 [2024-12-10 14:30:27.358340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.661 [2024-12-10 14:30:27.368543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.661 [2024-12-10 14:30:27.368562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.661 [2024-12-10 14:30:27.368570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.661 [2024-12-10 14:30:27.377253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.661 [2024-12-10 14:30:27.377272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.661 [2024-12-10 14:30:27.377280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.661 [2024-12-10 14:30:27.389392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.661 [2024-12-10 14:30:27.389411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.661 [2024-12-10 14:30:27.389419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.920 [2024-12-10 14:30:27.401988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.920 [2024-12-10 14:30:27.402008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.920 [2024-12-10 14:30:27.402015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.920 [2024-12-10 14:30:27.414715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.920 [2024-12-10 14:30:27.414734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.920 [2024-12-10 14:30:27.414742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.920 [2024-12-10 14:30:27.425471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 
00:28:26.920 [2024-12-10 14:30:27.425490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.920 [2024-12-10 14:30:27.425497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.920 [2024-12-10 14:30:27.438213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.920 [2024-12-10 14:30:27.438237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.920 [2024-12-10 14:30:27.438245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.920 [2024-12-10 14:30:27.449503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.920 [2024-12-10 14:30:27.449522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.920 [2024-12-10 14:30:27.449530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.920 [2024-12-10 14:30:27.461711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.920 [2024-12-10 14:30:27.461737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.920 [2024-12-10 14:30:27.461745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.920 [2024-12-10 14:30:27.470403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.920 [2024-12-10 14:30:27.470423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.920 [2024-12-10 14:30:27.470431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.920 [2024-12-10 14:30:27.481710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.920 [2024-12-10 14:30:27.481729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.920 [2024-12-10 14:30:27.481738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.920 [2024-12-10 14:30:27.491035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.920 [2024-12-10 14:30:27.491054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.920 [2024-12-10 14:30:27.491061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.920 [2024-12-10 14:30:27.499285] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.920 [2024-12-10 14:30:27.499304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.920 [2024-12-10 14:30:27.499312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.920 [2024-12-10 14:30:27.509821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.920 [2024-12-10 14:30:27.509840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.920 [2024-12-10 14:30:27.509848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.920 [2024-12-10 14:30:27.520191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.920 [2024-12-10 14:30:27.520210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.920 [2024-12-10 14:30:27.520223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.920 [2024-12-10 14:30:27.528840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.920 [2024-12-10 14:30:27.528859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.920 [2024-12-10 14:30:27.528867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.920 [2024-12-10 14:30:27.541049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.920 [2024-12-10 14:30:27.541068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.920 [2024-12-10 14:30:27.541076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.920 [2024-12-10 14:30:27.549192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.920 [2024-12-10 14:30:27.549211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.920 [2024-12-10 14:30:27.549225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.920 [2024-12-10 14:30:27.559421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.920 [2024-12-10 14:30:27.559440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.920 [2024-12-10 14:30:27.559448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:26.920 [2024-12-10 14:30:27.567707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.920 [2024-12-10 14:30:27.567726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.920 [2024-12-10 14:30:27.567734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.920 [2024-12-10 14:30:27.578419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.920 [2024-12-10 14:30:27.578439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.920 [2024-12-10 14:30:27.578446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.920 [2024-12-10 14:30:27.590141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.920 [2024-12-10 14:30:27.590162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.920 [2024-12-10 14:30:27.590170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.920 [2024-12-10 14:30:27.598525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.921 [2024-12-10 14:30:27.598544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.921 [2024-12-10 14:30:27.598553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.921 [2024-12-10 14:30:27.610372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.921 [2024-12-10 14:30:27.610391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.921 [2024-12-10 14:30:27.610398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.921 [2024-12-10 14:30:27.621576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.921 [2024-12-10 14:30:27.621595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.921 [2024-12-10 14:30:27.621603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.921 [2024-12-10 14:30:27.629824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.921 [2024-12-10 14:30:27.629851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.921 [2024-12-10 14:30:27.629859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.921 [2024-12-10 14:30:27.641834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.921 [2024-12-10 14:30:27.641854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.921 [2024-12-10 14:30:27.641862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.921 [2024-12-10 14:30:27.650160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:26.921 [2024-12-10 14:30:27.650179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.921 [2024-12-10 14:30:27.650187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.180 [2024-12-10 14:30:27.661858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.180 [2024-12-10 14:30:27.661878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.180 [2024-12-10 14:30:27.661885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.180 [2024-12-10 14:30:27.671878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.180 [2024-12-10 14:30:27.671898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.180 [2024-12-10 14:30:27.671906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.180 [2024-12-10 14:30:27.682750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.180 [2024-12-10 14:30:27.682771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.180 [2024-12-10 14:30:27.682779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.180 [2024-12-10 14:30:27.691563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.180 [2024-12-10 14:30:27.691583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.180 [2024-12-10 14:30:27.691591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.180 [2024-12-10 14:30:27.703941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.180 [2024-12-10 14:30:27.703961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.180 [2024-12-10 14:30:27.703969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.180 [2024-12-10 14:30:27.714865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.180 [2024-12-10 14:30:27.714885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.180 [2024-12-10 14:30:27.714893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.180 [2024-12-10 14:30:27.723613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.180 [2024-12-10 14:30:27.723633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.180 [2024-12-10 14:30:27.723640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.180 [2024-12-10 14:30:27.735424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.180 [2024-12-10 14:30:27.735445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.180 [2024-12-10 14:30:27.735452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.180 [2024-12-10 14:30:27.744284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.180 [2024-12-10 14:30:27.744304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.180 [2024-12-10 14:30:27.744311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.180 [2024-12-10 14:30:27.755749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.180 [2024-12-10 14:30:27.755771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.180 [2024-12-10 14:30:27.755778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.180 [2024-12-10 14:30:27.767329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.180 [2024-12-10 14:30:27.767349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.180 [2024-12-10 14:30:27.767357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.180 [2024-12-10 14:30:27.779241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.180 [2024-12-10 14:30:27.779261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:27.180 [2024-12-10 14:30:27.779269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.180 [2024-12-10 14:30:27.787206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.180 [2024-12-10 14:30:27.787230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.180 [2024-12-10 14:30:27.787238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.180 [2024-12-10 14:30:27.797205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.180 [2024-12-10 14:30:27.797231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.180 [2024-12-10 14:30:27.797239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.180 [2024-12-10 14:30:27.807330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.180 [2024-12-10 14:30:27.807348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.180 [2024-12-10 14:30:27.807360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.180 [2024-12-10 14:30:27.816655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.180 [2024-12-10 14:30:27.816674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.180 [2024-12-10 14:30:27.816682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.180 [2024-12-10 14:30:27.826447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.180 [2024-12-10 14:30:27.826466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.180 [2024-12-10 14:30:27.826474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.180 [2024-12-10 14:30:27.836934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.181 [2024-12-10 14:30:27.836953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.181 [2024-12-10 14:30:27.836961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.181 [2024-12-10 14:30:27.845860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.181 [2024-12-10 14:30:27.845879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:16000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.181 [2024-12-10 14:30:27.845886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.181 [2024-12-10 14:30:27.858262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.181 [2024-12-10 14:30:27.858282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.181 [2024-12-10 14:30:27.858289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.181 [2024-12-10 14:30:27.869063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.181 [2024-12-10 14:30:27.869083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.181 [2024-12-10 14:30:27.869091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.181 [2024-12-10 14:30:27.877258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.181 [2024-12-10 14:30:27.877277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.181 [2024-12-10 14:30:27.877285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.181 [2024-12-10 14:30:27.887111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.181 [2024-12-10 14:30:27.887133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.181 [2024-12-10 14:30:27.887140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.181 [2024-12-10 14:30:27.898050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.181 [2024-12-10 14:30:27.898074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.181 [2024-12-10 14:30:27.898082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.181 24077.00 IOPS, 94.05 MiB/s [2024-12-10T13:30:27.921Z] [2024-12-10 14:30:27.908457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.181 [2024-12-10 14:30:27.908477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.181 [2024-12-10 14:30:27.908486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.181 [2024-12-10 14:30:27.917804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.181 
[2024-12-10 14:30:27.917823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.181 [2024-12-10 14:30:27.917832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.440 [2024-12-10 14:30:27.926147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.440 [2024-12-10 14:30:27.926166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.440 [2024-12-10 14:30:27.926174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.440 [2024-12-10 14:30:27.935670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.440 [2024-12-10 14:30:27.935689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.440 [2024-12-10 14:30:27.935696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.440 [2024-12-10 14:30:27.944557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.440 [2024-12-10 14:30:27.944585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.440 [2024-12-10 14:30:27.944593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.440 [2024-12-10 14:30:27.955748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.440 [2024-12-10 14:30:27.955767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.440 [2024-12-10 14:30:27.955775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.440 [2024-12-10 14:30:27.964841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.440 [2024-12-10 14:30:27.964860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.440 [2024-12-10 14:30:27.964867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.440 [2024-12-10 14:30:27.974791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.440 [2024-12-10 14:30:27.974811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.440 [2024-12-10 14:30:27.974819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.440 [2024-12-10 14:30:27.983402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1af0dd0) 00:28:27.440 [2024-12-10 14:30:27.983421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.440 [2024-12-10 14:30:27.983429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.440 [2024-12-10 14:30:27.992480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.440 [2024-12-10 14:30:27.992498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.440 [2024-12-10 14:30:27.992506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.440 [2024-12-10 14:30:28.001676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.440 [2024-12-10 14:30:28.001695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.440 [2024-12-10 14:30:28.001703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.440 [2024-12-10 14:30:28.011225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.440 [2024-12-10 14:30:28.011244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.440 [2024-12-10 14:30:28.011252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.440 [2024-12-10 14:30:28.022335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.440 [2024-12-10 14:30:28.022354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.440 [2024-12-10 14:30:28.022362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.440 [2024-12-10 14:30:28.034434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.440 [2024-12-10 14:30:28.034453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.440 [2024-12-10 14:30:28.034460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.440 [2024-12-10 14:30:28.045627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.440 [2024-12-10 14:30:28.045646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.440 [2024-12-10 14:30:28.045654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.440 [2024-12-10 14:30:28.054264] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.440 [2024-12-10 14:30:28.054284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.440 [2024-12-10 14:30:28.054291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.440 [2024-12-10 14:30:28.066801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.440 [2024-12-10 14:30:28.066823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.440 [2024-12-10 14:30:28.066831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.440 [2024-12-10 14:30:28.076766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.440 [2024-12-10 14:30:28.076785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.440 [2024-12-10 14:30:28.076793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.440 [2024-12-10 14:30:28.087270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.440 [2024-12-10 14:30:28.087289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.440 [2024-12-10 14:30:28.087297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.440 [2024-12-10 14:30:28.095282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.441 [2024-12-10 14:30:28.095302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.441 [2024-12-10 14:30:28.095309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.441 [2024-12-10 14:30:28.107192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.441 [2024-12-10 14:30:28.107212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.441 [2024-12-10 14:30:28.107225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.441 [2024-12-10 14:30:28.117821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.441 [2024-12-10 14:30:28.117841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.441 [2024-12-10 14:30:28.117849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:27.441 [2024-12-10 14:30:28.127889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.441 [2024-12-10 14:30:28.127909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.441 [2024-12-10 14:30:28.127917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.441 [2024-12-10 14:30:28.138378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.441 [2024-12-10 14:30:28.138398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.441 [2024-12-10 14:30:28.138406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.441 [2024-12-10 14:30:28.146395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.441 [2024-12-10 14:30:28.146416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.441 [2024-12-10 14:30:28.146424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.441 [2024-12-10 14:30:28.155250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.441 [2024-12-10 14:30:28.155270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.441 [2024-12-10 14:30:28.155277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.441 [2024-12-10 14:30:28.166027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.441 [2024-12-10 14:30:28.166048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.441 [2024-12-10 14:30:28.166055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.441 [2024-12-10 14:30:28.175763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.441 [2024-12-10 14:30:28.175784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.441 [2024-12-10 14:30:28.175792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.700 [2024-12-10 14:30:28.186386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.700 [2024-12-10 14:30:28.186407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.700 [2024-12-10 14:30:28.186415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.700 [2024-12-10 14:30:28.199082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.700 [2024-12-10 14:30:28.199102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.700 [2024-12-10 14:30:28.199110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.700 [2024-12-10 14:30:28.211941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.700 [2024-12-10 14:30:28.211961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.700 [2024-12-10 14:30:28.211969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.700 [2024-12-10 14:30:28.220930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.700 [2024-12-10 14:30:28.220950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.700 [2024-12-10 14:30:28.220958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.700 [2024-12-10 14:30:28.229490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.700 [2024-12-10 14:30:28.229510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.700 [2024-12-10 14:30:28.229518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.700 [2024-12-10 14:30:28.239892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.700 [2024-12-10 14:30:28.239911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.700 [2024-12-10 14:30:28.239922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.700 [2024-12-10 14:30:28.249479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.700 [2024-12-10 14:30:28.249499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.700 [2024-12-10 14:30:28.249507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.700 [2024-12-10 14:30:28.259577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.700 [2024-12-10 14:30:28.259597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.700 [2024-12-10 14:30:28.259605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.700 [2024-12-10 14:30:28.270262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.700 [2024-12-10 14:30:28.270281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.700 [2024-12-10 14:30:28.270288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.700 [2024-12-10 14:30:28.278880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.700 [2024-12-10 14:30:28.278900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.700 [2024-12-10 14:30:28.278907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.700 [2024-12-10 14:30:28.289471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.700 [2024-12-10 14:30:28.289491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.700 [2024-12-10 14:30:28.289499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.700 [2024-12-10 14:30:28.301912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.700 [2024-12-10 14:30:28.301932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.700 [2024-12-10 14:30:28.301940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.700 [2024-12-10 14:30:28.312905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.700 [2024-12-10 14:30:28.312925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.700 [2024-12-10 14:30:28.312933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.701 [2024-12-10 14:30:28.322352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.701 [2024-12-10 14:30:28.322371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.701 [2024-12-10 14:30:28.322379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.701 [2024-12-10 14:30:28.331304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.701 [2024-12-10 14:30:28.331326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:27.701 [2024-12-10 14:30:28.331334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.701 [2024-12-10 14:30:28.340413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.701 [2024-12-10 14:30:28.340432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.701 [2024-12-10 14:30:28.340440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.701 [2024-12-10 14:30:28.349468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.701 [2024-12-10 14:30:28.349489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.701 [2024-12-10 14:30:28.349496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.701 [2024-12-10 14:30:28.359022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.701 [2024-12-10 14:30:28.359042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.701 [2024-12-10 14:30:28.359050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.701 [2024-12-10 14:30:28.369755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.701 [2024-12-10 14:30:28.369776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.701 [2024-12-10 14:30:28.369784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.701 [2024-12-10 14:30:28.378141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.701 [2024-12-10 14:30:28.378161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.701 [2024-12-10 14:30:28.378170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.701 [2024-12-10 14:30:28.388096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.701 [2024-12-10 14:30:28.388116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.701 [2024-12-10 14:30:28.388124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.701 [2024-12-10 14:30:28.396895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.701 [2024-12-10 14:30:28.396915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:15184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.701 [2024-12-10 14:30:28.396923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.701 [2024-12-10 14:30:28.406452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.701 [2024-12-10 14:30:28.406472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.701 [2024-12-10 14:30:28.406480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.701 [2024-12-10 14:30:28.416789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.701 [2024-12-10 14:30:28.416809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.701 [2024-12-10 14:30:28.416817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.701 [2024-12-10 14:30:28.426049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.701 [2024-12-10 14:30:28.426069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.701 [2024-12-10 14:30:28.426077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.701 [2024-12-10 14:30:28.434686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.701 [2024-12-10 14:30:28.434706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.701 [2024-12-10 14:30:28.434714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.960 [2024-12-10 14:30:28.446996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.960 [2024-12-10 14:30:28.447017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.960 [2024-12-10 14:30:28.447025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.960 [2024-12-10 14:30:28.455712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.960 [2024-12-10 14:30:28.455734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.960 [2024-12-10 14:30:28.455743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.960 [2024-12-10 14:30:28.466580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.960 [2024-12-10 14:30:28.466600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.960 [2024-12-10 14:30:28.466607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.960 [2024-12-10 14:30:28.475768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.960 [2024-12-10 14:30:28.475787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.960 [2024-12-10 14:30:28.475795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.960 [2024-12-10 14:30:28.485177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.960 [2024-12-10 14:30:28.485196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.960 [2024-12-10 14:30:28.485203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.960 [2024-12-10 14:30:28.494609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.960 [2024-12-10 14:30:28.494633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.961 [2024-12-10 14:30:28.494640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.961 [2024-12-10 14:30:28.504722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.961 [2024-12-10 14:30:28.504741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.961 [2024-12-10 14:30:28.504749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.961 [2024-12-10 14:30:28.514050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.961 [2024-12-10 14:30:28.514070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.961 [2024-12-10 14:30:28.514078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.961 [2024-12-10 14:30:28.522988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.961 [2024-12-10 14:30:28.523008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.961 [2024-12-10 14:30:28.523016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.961 [2024-12-10 14:30:28.534948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 
00:28:27.961 [2024-12-10 14:30:28.534968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.961 [2024-12-10 14:30:28.534976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.961 [2024-12-10 14:30:28.546321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.961 [2024-12-10 14:30:28.546340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.961 [2024-12-10 14:30:28.546348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.961 [2024-12-10 14:30:28.556143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.961 [2024-12-10 14:30:28.556162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.961 [2024-12-10 14:30:28.556170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.961 [2024-12-10 14:30:28.564310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.961 [2024-12-10 14:30:28.564330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.961 [2024-12-10 14:30:28.564337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.961 [2024-12-10 14:30:28.574621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.961 [2024-12-10 14:30:28.574640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.961 [2024-12-10 14:30:28.574648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.961 [2024-12-10 14:30:28.584489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.961 [2024-12-10 14:30:28.584508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.961 [2024-12-10 14:30:28.584516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.961 [2024-12-10 14:30:28.593342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.961 [2024-12-10 14:30:28.593361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.961 [2024-12-10 14:30:28.593368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.961 [2024-12-10 14:30:28.602195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.961 [2024-12-10 14:30:28.602213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.961 [2024-12-10 14:30:28.602227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.961 [2024-12-10 14:30:28.611381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.961 [2024-12-10 14:30:28.611400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.961 [2024-12-10 14:30:28.611408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.961 [2024-12-10 14:30:28.620933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.961 [2024-12-10 14:30:28.620952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.961 [2024-12-10 14:30:28.620960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.961 [2024-12-10 14:30:28.631320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.961 [2024-12-10 14:30:28.631340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.961 [2024-12-10 14:30:28.631347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.961 [2024-12-10 14:30:28.640574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.961 [2024-12-10 14:30:28.640593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.961 [2024-12-10 14:30:28.640601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.961 [2024-12-10 14:30:28.650221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.961 [2024-12-10 14:30:28.650240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.961 [2024-12-10 14:30:28.650248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.961 [2024-12-10 14:30:28.659597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.961 [2024-12-10 14:30:28.659615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.961 [2024-12-10 14:30:28.659626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.961 [2024-12-10 14:30:28.668984] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.961 [2024-12-10 14:30:28.669004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.961 [2024-12-10 14:30:28.669012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.961 [2024-12-10 14:30:28.678102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.961 [2024-12-10 14:30:28.678121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.961 [2024-12-10 14:30:28.678129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.961 [2024-12-10 14:30:28.688011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.961 [2024-12-10 14:30:28.688030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.961 [2024-12-10 14:30:28.688038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.961 [2024-12-10 14:30:28.697819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:27.961 [2024-12-10 14:30:28.697838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.961 [2024-12-10 14:30:28.697847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.221 [2024-12-10 14:30:28.708659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:28.221 [2024-12-10 14:30:28.708678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.221 [2024-12-10 14:30:28.708685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.221 [2024-12-10 14:30:28.721141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:28.221 [2024-12-10 14:30:28.721160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.221 [2024-12-10 14:30:28.721167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.221 [2024-12-10 14:30:28.730214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:28.221 [2024-12-10 14:30:28.730239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.221 [2024-12-10 14:30:28.730246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:28.221 [2024-12-10 14:30:28.742344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:28.221 [2024-12-10 14:30:28.742364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.221 [2024-12-10 14:30:28.742373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.221 [2024-12-10 14:30:28.753912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:28.221 [2024-12-10 14:30:28.753937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.221 [2024-12-10 14:30:28.753945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.221 [2024-12-10 14:30:28.765483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:28.221 [2024-12-10 14:30:28.765503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.221 [2024-12-10 14:30:28.765510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.221 [2024-12-10 14:30:28.774372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:28.221 [2024-12-10 14:30:28.774391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.221 [2024-12-10 14:30:28.774399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.221 [2024-12-10 14:30:28.786190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:28.221 [2024-12-10 14:30:28.786210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.221 [2024-12-10 14:30:28.786222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.221 [2024-12-10 14:30:28.797483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:28.221 [2024-12-10 14:30:28.797502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.221 [2024-12-10 14:30:28.797509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.221 [2024-12-10 14:30:28.805901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:28.221 [2024-12-10 14:30:28.805921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.221 [2024-12-10 14:30:28.805929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.221 [2024-12-10 14:30:28.818565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:28.221 [2024-12-10 14:30:28.818585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.221 [2024-12-10 14:30:28.818592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.221 [2024-12-10 14:30:28.829569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:28.221 [2024-12-10 14:30:28.829588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.221 [2024-12-10 14:30:28.829597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.221 [2024-12-10 14:30:28.840995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:28.221 [2024-12-10 14:30:28.841014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.221 [2024-12-10 14:30:28.841022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.221 [2024-12-10 14:30:28.851949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:28.221 [2024-12-10 14:30:28.851968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.221 [2024-12-10 14:30:28.851976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.221 [2024-12-10 14:30:28.859753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:28.221 [2024-12-10 14:30:28.859772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.221 [2024-12-10 14:30:28.859780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.221 [2024-12-10 14:30:28.870814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:28.221 [2024-12-10 14:30:28.870833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.221 [2024-12-10 14:30:28.870840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.221 [2024-12-10 14:30:28.880383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0) 00:28:28.221 [2024-12-10 14:30:28.880409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.221 [2024-12-10 14:30:28.880417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:28.221 [2024-12-10 14:30:28.890625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0)
00:28:28.221 [2024-12-10 14:30:28.890645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.221 [2024-12-10 14:30:28.890652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:28.221 [2024-12-10 14:30:28.900064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1af0dd0)
00:28:28.221 [2024-12-10 14:30:28.900084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.221 [2024-12-10 14:30:28.900091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:28.221 24775.50 IOPS, 96.78 MiB/s
00:28:28.221 Latency(us)
00:28:28.221 [2024-12-10T13:30:28.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:28.222 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:28.222 nvme0n1 : 2.00 24793.67 96.85 0.00 0.00 5157.89 2559.02 17975.59
00:28:28.222 [2024-12-10T13:30:28.962Z] ===================================================================================================================
00:28:28.222 [2024-12-10T13:30:28.962Z] Total : 24793.67 96.85 0.00 0.00 5157.89 2559.02 17975.59
00:28:28.222 {
00:28:28.222   "results": [
00:28:28.222     {
00:28:28.222       "job": "nvme0n1",
00:28:28.222       "core_mask": "0x2",
00:28:28.222       "workload": "randread",
00:28:28.222       "status": "finished",
00:28:28.222       "queue_depth": 128,
00:28:28.222       "io_size": 4096,
00:28:28.222       "runtime": 2.003697,
00:28:28.222       "iops": 24793.668903032743,
00:28:28.222       "mibps": 96.85026915247165,
00:28:28.222       "io_failed": 0,
00:28:28.222       "io_timeout": 0,
00:28:28.222       "avg_latency_us": 5157.890293857998,
00:28:28.222       "min_latency_us": 2559.024761904762,
00:28:28.222       "max_latency_us": 17975.588571428572
00:28:28.222     }
00:28:28.222   ],
00:28:28.222   "core_count": 1
00:28:28.222 }
00:28:28.222 14:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:28.222 14:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:28.222 14:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:28.222 | .driver_specific
00:28:28.222 | .nvme_error
00:28:28.222 | .status_code
00:28:28.222 | .command_transient_transport_error'
00:28:28.222 14:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:28.481 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 194 > 0 ))
00:28:28.481 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1802652
00:28:28.481 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1802652 ']'
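The get_transient_errcount trace above is the pass/fail check for this case: the harness reads the bdev's NVMe error statistics over the bperf RPC socket and asserts that the deliberately corrupted data digests were counted as transient transport errors (194 of them here). A minimal standalone sketch of the same query, with the socket path, rpc.py path, and bdev name taken from this run (they would differ elsewhere):

  #!/usr/bin/env bash
  # Count READ completions that ended in COMMAND TRANSIENT TRANSPORT ERROR (00/22)
  # for bdev nvme0n1, assuming a bdevperf instance is serving RPCs on bperf.sock.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The digest-error test only passes when at least one such error was recorded.
  (( errcount > 0 )) && echo "observed ${errcount} transient transport errors"

The per-status-code nvme_error block shows up in bdev_get_iostat because the controller was configured with --nvme-error-stat, as the bdev_nvme_set_options call traced later in this log shows for the next case.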
00:28:28.481 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1802652
00:28:28.481 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:28.481 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:28.481 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1802652
00:28:28.481 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:28.481 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:28.481 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1802652'
00:28:28.481 killing process with pid 1802652
00:28:28.481 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1802652
00:28:28.481 Received shutdown signal, test time was about 2.000000 seconds
00:28:28.481
00:28:28.481 Latency(us)
00:28:28.481 [2024-12-10T13:30:29.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:28.481 [2024-12-10T13:30:29.221Z] ===================================================================================================================
00:28:28.481 [2024-12-10T13:30:29.221Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:28.481 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1802652
00:28:28.740 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:28.740 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:28.740 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:28.740 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:28.740 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:28.740 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1803120
00:28:28.740 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1803120 /var/tmp/bperf.sock
00:28:28.740 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:28.740 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1803120 ']'
00:28:28.740 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:28.740 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:28.740 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:28.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
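With the previous bdevperf (pid 1802652) killed and reaped, run_bperf_err starts the next case: random reads of 131072 bytes at queue depth 16. Stripped of the harness variables, the launch traced above reduces to the sketch below; paths and socket name are as in this run, and my reading of -z (bdevperf sits idle until a perform_tests RPC arrives) is an interpretation of the trace rather than something the log states:

  # Start bdevperf on core mask 0x2 with a private RPC socket; -o 131072 is the
  # I/O size in bytes, -q 16 the queue depth, -t 2 the run time in seconds.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # waitforlisten then blocks until the UNIX socket above accepts RPC connections.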
00:28:28.740 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:28.740 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:28.740 [2024-12-10 14:30:29.375894] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization...
00:28:28.740 [2024-12-10 14:30:29.375941] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1803120 ]
00:28:28.740 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:28.740 Zero copy mechanism will not be used.
00:28:28.740 [2024-12-10 14:30:29.457212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:28.999 [2024-12-10 14:30:29.494136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:28.999 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:28.999 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:28:28.999 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:28.999 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:29.258 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:29.258 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.258 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:29.258 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.258 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:29.258 14:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:29.517 nvme0n1
00:28:29.517 14:30:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:29.517 14:30:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.517 14:30:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:29.517 14:30:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.517 14:30:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:29.517 14:30:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
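The sequence just traced is the fault injection for this case: NVMe error counters and unlimited retries are enabled first, the TCP controller is attached with data digest (--ddgst) turned on, and the accel error module is then told to corrupt crc32c results, so affected READs complete with a data digest error that the driver retries and counts rather than fails. A condensed sketch of the same RPC sequence, with addresses and NQN as in this run; my reading of -i 32 as an injection interval (corrupt every 32nd crc32c operation) is an assumption the log itself does not spell out:

  rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  # Keep per-status-code NVMe error counters and retry failed I/O indefinitely.
  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Clear any stale crc32c error injection, then attach with data digest enabled.
  rpc accel_error_inject_error -o crc32c -t disable
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Start corrupting crc32c results so received data digests stop matching.
  rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the 2-second randread workload configured at bdevperf launch.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests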
00:28:29.791 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:29.791 Zero copy mechanism will not be used. 00:28:29.791 Running I/O for 2 seconds... 00:28:29.791 [2024-12-10 14:30:30.330567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.791 [2024-12-10 14:30:30.330603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.791 [2024-12-10 14:30:30.330614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.791 [2024-12-10 14:30:30.335862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.791 [2024-12-10 14:30:30.335891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.791 [2024-12-10 14:30:30.335900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.791 [2024-12-10 14:30:30.341042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.791 [2024-12-10 14:30:30.341063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.791 [2024-12-10 14:30:30.341071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.791 [2024-12-10 14:30:30.346225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.791 [2024-12-10 14:30:30.346246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.346254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.351361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.351382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.351391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.356476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.356497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.356505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.361610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.361631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:640 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.361639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.366733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.366754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.366762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.371845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.371865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.371873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.376936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.376957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.376965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.382040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.382061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.382070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.387431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.387452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.387460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.394659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.394680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.394687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.400256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.400277] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.400284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.405031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.405052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.405060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.409797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.409817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.409825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.414551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.414572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.414580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.419479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.419499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.419507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.424322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.424344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.424355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.429430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.429451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.429459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.434646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.434667] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.434675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.439796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.439816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.439824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.444994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.445014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.445022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.450118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.450140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.450148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.455255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.455276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.455284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.460396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.460417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.460425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.466459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.466481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.466489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.471705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.471730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.471737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.476797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.476817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.792 [2024-12-10 14:30:30.476825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.792 [2024-12-10 14:30:30.481938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.792 [2024-12-10 14:30:30.481958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.793 [2024-12-10 14:30:30.481966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.793 [2024-12-10 14:30:30.487047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.793 [2024-12-10 14:30:30.487067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.793 [2024-12-10 14:30:30.487075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.793 [2024-12-10 14:30:30.492162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.793 [2024-12-10 14:30:30.492184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.793 [2024-12-10 14:30:30.492192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.793 [2024-12-10 14:30:30.497317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.793 [2024-12-10 14:30:30.497338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.793 [2024-12-10 14:30:30.497346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.793 [2024-12-10 14:30:30.502421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.793 [2024-12-10 14:30:30.502442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.793 [2024-12-10 14:30:30.502449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.793 [2024-12-10 14:30:30.507491] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.793 [2024-12-10 14:30:30.507511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.793 [2024-12-10 14:30:30.507519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.793 [2024-12-10 14:30:30.512590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.793 [2024-12-10 14:30:30.512611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.793 [2024-12-10 14:30:30.512620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.793 [2024-12-10 14:30:30.517735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.793 [2024-12-10 14:30:30.517755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.793 [2024-12-10 14:30:30.517763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.793 [2024-12-10 14:30:30.522843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:29.793 [2024-12-10 14:30:30.522864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.793 [2024-12-10 14:30:30.522871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:30.053 [2024-12-10 14:30:30.527985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.053 [2024-12-10 14:30:30.528006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.053 [2024-12-10 14:30:30.528013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:30.053 [2024-12-10 14:30:30.533130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.053 [2024-12-10 14:30:30.533151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.533159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.538249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.538269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.538277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:28:30.054 [2024-12-10 14:30:30.543411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.543432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.543440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.546857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.546878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.546886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.550749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.550770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.550778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.555947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.555975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.555982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.561198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.561224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.561232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.566353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.566375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.566383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.571676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.571697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.571706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.576831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.576852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.576859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.581998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.582019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.582027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.587248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.587269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.587277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.592494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.592515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.592523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.597636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.597656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.597664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.602822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.602843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.602852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.607983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.608003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.608011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.613130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.613151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.613159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.618262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.618282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.618290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.623404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.623425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.623433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.628107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.628128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.628136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.633049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.633070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.633080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.637969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.637989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.637997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.643018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.643038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:30.054 [2024-12-10 14:30:30.643050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.648126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.648146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.648154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.653276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.653297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.653305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.658447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.658466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.658474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.663626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.663646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.054 [2024-12-10 14:30:30.663653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:30.054 [2024-12-10 14:30:30.668779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.054 [2024-12-10 14:30:30.668800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.668808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.673955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.055 [2024-12-10 14:30:30.673976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.673984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.679075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.055 [2024-12-10 14:30:30.679095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5888 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.679102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.684224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.055 [2024-12-10 14:30:30.684244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.684251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.689378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.055 [2024-12-10 14:30:30.689403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.689410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.694511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.055 [2024-12-10 14:30:30.694531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.694538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.699624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.055 [2024-12-10 14:30:30.699644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.699652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.704748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.055 [2024-12-10 14:30:30.704768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.704776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.709902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.055 [2024-12-10 14:30:30.709923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.709930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.715093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.055 [2024-12-10 14:30:30.715114] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.715121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.720273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.055 [2024-12-10 14:30:30.720294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.720302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.725433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.055 [2024-12-10 14:30:30.725453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.725461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.730558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.055 [2024-12-10 14:30:30.730578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.730586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.735691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.055 [2024-12-10 14:30:30.735712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.735719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.740768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.055 [2024-12-10 14:30:30.740788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.740796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.745859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.055 [2024-12-10 14:30:30.745879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.745886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.750986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.055 
[2024-12-10 14:30:30.751006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.751014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.756185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.055 [2024-12-10 14:30:30.756206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.756213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.761333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.055 [2024-12-10 14:30:30.761353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.761361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.766454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.055 [2024-12-10 14:30:30.766475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.766482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.771544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.055 [2024-12-10 14:30:30.771564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.771572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.776658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.055 [2024-12-10 14:30:30.776678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.776690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.781738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.055 [2024-12-10 14:30:30.781758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.781765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:30.055 [2024-12-10 14:30:30.786827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x178d420) 00:28:30.055 [2024-12-10 14:30:30.786848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.055 [2024-12-10 14:30:30.786856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:30.315 [2024-12-10 14:30:30.791916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.315 [2024-12-10 14:30:30.791937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.315 [2024-12-10 14:30:30.791945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:30.315 [2024-12-10 14:30:30.796954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.315 [2024-12-10 14:30:30.796974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.315 [2024-12-10 14:30:30.796982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:30.315 [2024-12-10 14:30:30.802047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.315 [2024-12-10 14:30:30.802067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.315 [2024-12-10 14:30:30.802074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:30.315 [2024-12-10 14:30:30.807120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.315 [2024-12-10 14:30:30.807141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.315 [2024-12-10 14:30:30.807149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:30.315 [2024-12-10 14:30:30.812238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.315 [2024-12-10 14:30:30.812257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.315 [2024-12-10 14:30:30.812265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:30.315 [2024-12-10 14:30:30.817354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.315 [2024-12-10 14:30:30.817374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.316 [2024-12-10 14:30:30.817382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:30.316 [2024-12-10 14:30:30.822443] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.316 [2024-12-10 14:30:30.822467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.316 [2024-12-10 14:30:30.822475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:30.316 [2024-12-10 14:30:30.827560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.316 [2024-12-10 14:30:30.827580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.316 [2024-12-10 14:30:30.827587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:30.316 [2024-12-10 14:30:30.832726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.316 [2024-12-10 14:30:30.832747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.316 [2024-12-10 14:30:30.832755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:30.316 [2024-12-10 14:30:30.837930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.316 [2024-12-10 14:30:30.837951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.316 [2024-12-10 14:30:30.837959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:30.316 [2024-12-10 14:30:30.843133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.316 [2024-12-10 14:30:30.843154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.316 [2024-12-10 14:30:30.843162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:30.316 [2024-12-10 14:30:30.848309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.316 [2024-12-10 14:30:30.848330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.316 [2024-12-10 14:30:30.848338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:30.316 [2024-12-10 14:30:30.853436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:30.316 [2024-12-10 14:30:30.853457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.316 [2024-12-10 14:30:30.853464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0
00:28:30.316 [2024-12-10 14:30:30.858526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:30.316 [2024-12-10 14:30:30.858546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.316 [2024-12-10 14:30:30.858554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-line sequence (nvme_tcp data digest error on tqpair 0x178d420 -> READ command print -> COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for further reads, timestamps 14:30:30.863703 through 14:30:31.326705 ...]
00:28:30.839 5974.00 IOPS, 746.75 MiB/s [2024-12-10T13:30:31.579Z]
[... the same digest-error sequence continues for further reads, timestamps 14:30:31.332808 through 14:30:31.663674 ...]
00:28:31.101 [2024-12-10 14:30:31.671045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.101 [2024-12-10 14:30:31.671067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.101 [2024-12-10 14:30:31.671076] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:31.101 [2024-12-10 14:30:31.678445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:31.101 [2024-12-10 14:30:31.678469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.101 [2024-12-10 14:30:31.678477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:31.101 [2024-12-10 14:30:31.685811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:31.101 [2024-12-10 14:30:31.685833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.101 [2024-12-10 14:30:31.685841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:31.101 [2024-12-10 14:30:31.693895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:31.101 [2024-12-10 14:30:31.693917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.101 [2024-12-10 14:30:31.693926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:31.101 [2024-12-10 14:30:31.700418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:31.101 [2024-12-10 14:30:31.700444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.101 [2024-12-10 14:30:31.700452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:31.101 [2024-12-10 14:30:31.706570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:31.101 [2024-12-10 14:30:31.706591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.101 [2024-12-10 14:30:31.706600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:31.101 [2024-12-10 14:30:31.713186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:31.101 [2024-12-10 14:30:31.713209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.101 [2024-12-10 14:30:31.713224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:31.101 [2024-12-10 14:30:31.720119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:31.101 [2024-12-10 14:30:31.720142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:31.101 [2024-12-10 14:30:31.720152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.101 [2024-12-10 14:30:31.728564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.101 [2024-12-10 14:30:31.728587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.101 [2024-12-10 14:30:31.728596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.101 [2024-12-10 14:30:31.736007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.101 [2024-12-10 14:30:31.736030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.101 [2024-12-10 14:30:31.736039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.101 [2024-12-10 14:30:31.743315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.101 [2024-12-10 14:30:31.743338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.101 [2024-12-10 14:30:31.743346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.101 [2024-12-10 14:30:31.749657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.101 [2024-12-10 14:30:31.749679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.101 [2024-12-10 14:30:31.749687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.101 [2024-12-10 14:30:31.756460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.101 [2024-12-10 14:30:31.756483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.101 [2024-12-10 14:30:31.756492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.102 [2024-12-10 14:30:31.762133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.102 [2024-12-10 14:30:31.762155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.102 [2024-12-10 14:30:31.762163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.102 [2024-12-10 14:30:31.767421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.102 [2024-12-10 14:30:31.767443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.102 [2024-12-10 14:30:31.767451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.102 [2024-12-10 14:30:31.771955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.102 [2024-12-10 14:30:31.771977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.102 [2024-12-10 14:30:31.771985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.102 [2024-12-10 14:30:31.777104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.102 [2024-12-10 14:30:31.777126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.102 [2024-12-10 14:30:31.777134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.102 [2024-12-10 14:30:31.782288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.102 [2024-12-10 14:30:31.782309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.102 [2024-12-10 14:30:31.782318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.102 [2024-12-10 14:30:31.787592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.102 [2024-12-10 14:30:31.787612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.102 [2024-12-10 14:30:31.787620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.102 [2024-12-10 14:30:31.792944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.102 [2024-12-10 14:30:31.792965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.102 [2024-12-10 14:30:31.792973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.102 [2024-12-10 14:30:31.798245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.102 [2024-12-10 14:30:31.798266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.102 [2024-12-10 14:30:31.798274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.102 [2024-12-10 14:30:31.803453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.102 [2024-12-10 14:30:31.803475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.102 [2024-12-10 14:30:31.803487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.102 [2024-12-10 14:30:31.808789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.102 [2024-12-10 14:30:31.808811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.102 [2024-12-10 14:30:31.808819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.102 [2024-12-10 14:30:31.814160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.102 [2024-12-10 14:30:31.814182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.102 [2024-12-10 14:30:31.814190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.102 [2024-12-10 14:30:31.819526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.102 [2024-12-10 14:30:31.819547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.102 [2024-12-10 14:30:31.819555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.102 [2024-12-10 14:30:31.824928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.102 [2024-12-10 14:30:31.824949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.102 [2024-12-10 14:30:31.824957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.102 [2024-12-10 14:30:31.830265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.102 [2024-12-10 14:30:31.830286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.102 [2024-12-10 14:30:31.830294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.102 [2024-12-10 14:30:31.835817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.102 [2024-12-10 14:30:31.835839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.102 [2024-12-10 14:30:31.835848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.362 [2024-12-10 14:30:31.841490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.362 [2024-12-10 14:30:31.841511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.362 [2024-12-10 14:30:31.841520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.362 [2024-12-10 14:30:31.848277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.362 [2024-12-10 14:30:31.848299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.362 [2024-12-10 14:30:31.848309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.362 [2024-12-10 14:30:31.855964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.362 [2024-12-10 14:30:31.855986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.362 [2024-12-10 14:30:31.855994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.362 [2024-12-10 14:30:31.862472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.362 [2024-12-10 14:30:31.862494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.362 [2024-12-10 14:30:31.862502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.362 [2024-12-10 14:30:31.868952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.362 [2024-12-10 14:30:31.868974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.362 [2024-12-10 14:30:31.868982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.362 [2024-12-10 14:30:31.874718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.362 [2024-12-10 14:30:31.874740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.362 [2024-12-10 14:30:31.874748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.362 [2024-12-10 14:30:31.879954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.362 [2024-12-10 14:30:31.879975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.362 [2024-12-10 14:30:31.879983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.362 [2024-12-10 14:30:31.885384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.362 [2024-12-10 14:30:31.885406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.362 [2024-12-10 14:30:31.885414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.362 [2024-12-10 14:30:31.890820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.362 [2024-12-10 14:30:31.890842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.362 [2024-12-10 14:30:31.890851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.362 [2024-12-10 14:30:31.896168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.362 [2024-12-10 14:30:31.896190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.362 [2024-12-10 14:30:31.896198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.362 [2024-12-10 14:30:31.901701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.362 [2024-12-10 14:30:31.901723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.362 [2024-12-10 14:30:31.901734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.362 [2024-12-10 14:30:31.906965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.362 [2024-12-10 14:30:31.906986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.362 [2024-12-10 14:30:31.906994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.362 [2024-12-10 14:30:31.912116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.362 [2024-12-10 14:30:31.912138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:31.912147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:31.917302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:31.917324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:31.917332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:31.922454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:31.922476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:31.922484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:31.927657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:31.927679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:31.927687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:31.932912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:31.932933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:31.932942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:31.938224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:31.938245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:31.938253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:31.943524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:31.943546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:31.943554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:31.948888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:31.948914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:31.948922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:31.954237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:31.954258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:31.954266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:31.959507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:31.959530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:31.959538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:31.964928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:31.964950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:31.964958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:31.970523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:31.970546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:31.970554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:31.976876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:31.976899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:31.976908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:31.984234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:31.984256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:31.984264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:31.989600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:31.989623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:31.989631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:31.994568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:31.994590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:31.994598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:31.999491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:31.999513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:31.999522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:32.004361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:32.004382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:32.004390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:32.009313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:32.009335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:32.009343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:32.014377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:32.014398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:32.014407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:32.019368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:32.019391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:32.019399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:32.024366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:32.024389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:32.024397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:32.029688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:32.029709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:32.029717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:32.035066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:32.035088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:32.035096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:32.040326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:32.040347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:32.040358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:32.045662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:32.045684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:32.045691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:32.051041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:32.051062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:32.051070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.363 [2024-12-10 14:30:32.056317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.363 [2024-12-10 14:30:32.056339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.363 [2024-12-10 14:30:32.056347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.364 [2024-12-10 14:30:32.061437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.364 [2024-12-10 14:30:32.061459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.364 [2024-12-10 14:30:32.061467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.364 [2024-12-10 14:30:32.066688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.364 [2024-12-10 14:30:32.066709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.364 [2024-12-10 14:30:32.066716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.364 [2024-12-10 14:30:32.071879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.364 [2024-12-10 14:30:32.071901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.364 [2024-12-10 14:30:32.071908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.364 [2024-12-10 14:30:32.077050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.364 [2024-12-10 14:30:32.077072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.364 [2024-12-10 14:30:32.077081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.364 [2024-12-10 14:30:32.082323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.364 [2024-12-10 14:30:32.082345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.364 [2024-12-10 14:30:32.082352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.364 [2024-12-10 14:30:32.088110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.364 [2024-12-10 14:30:32.088132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.364 [2024-12-10 14:30:32.088140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.364 [2024-12-10 14:30:32.093799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.364 [2024-12-10 14:30:32.093821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.364 [2024-12-10 14:30:32.093829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.364 [2024-12-10 14:30:32.099156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.364 [2024-12-10 14:30:32.099179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.364 [2024-12-10 14:30:32.099189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.624 [2024-12-10 14:30:32.104300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.624 [2024-12-10 14:30:32.104322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.624 [2024-12-10 14:30:32.104342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.624 [2024-12-10 14:30:32.109465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.624 [2024-12-10 14:30:32.109487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.624 [2024-12-10 14:30:32.109494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.624 [2024-12-10 14:30:32.114655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.624 [2024-12-10 14:30:32.114677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.624 [2024-12-10 14:30:32.114685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.624 [2024-12-10 14:30:32.119829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.624 [2024-12-10 14:30:32.119851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.624 [2024-12-10 14:30:32.119859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.624 [2024-12-10 14:30:32.125020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.624 [2024-12-10 14:30:32.125041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.624 [2024-12-10 14:30:32.125048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.624 [2024-12-10 14:30:32.130156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.624 [2024-12-10 14:30:32.130178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.624 [2024-12-10 14:30:32.130190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.624 [2024-12-10 14:30:32.135268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.624 [2024-12-10 14:30:32.135290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.624 [2024-12-10 14:30:32.135298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.624 [2024-12-10 14:30:32.140509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.624 [2024-12-10 14:30:32.140531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.624 [2024-12-10 14:30:32.140539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.624 [2024-12-10 14:30:32.145877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.624 [2024-12-10 14:30:32.145899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.624 [2024-12-10 14:30:32.145906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.624 [2024-12-10 14:30:32.151364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.624 [2024-12-10 14:30:32.151386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.624 [2024-12-10 14:30:32.151394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.624 [2024-12-10 14:30:32.156729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.624 [2024-12-10 14:30:32.156751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.624 [2024-12-10 14:30:32.156759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.624 [2024-12-10 14:30:32.162094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.624 [2024-12-10 14:30:32.162116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.624 [2024-12-10 14:30:32.162125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.624 [2024-12-10 14:30:32.167368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.624 [2024-12-10 14:30:32.167390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.624 [2024-12-10 14:30:32.167399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.624 [2024-12-10 14:30:32.172791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.624 [2024-12-10 14:30:32.172813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.624 [2024-12-10 14:30:32.172821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.624 [2024-12-10 14:30:32.178121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.624 [2024-12-10 14:30:32.178147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.624 [2024-12-10 14:30:32.178154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.624 [2024-12-10 14:30:32.183183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.624 [2024-12-10 14:30:32.183205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.624 [2024-12-10 14:30:32.183214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.624 [2024-12-10 14:30:32.188525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.624 [2024-12-10 14:30:32.188547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.624 [2024-12-10 14:30:32.188555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.625 [2024-12-10 14:30:32.193784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.625 [2024-12-10 14:30:32.193806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.625 [2024-12-10 14:30:32.193814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.625 [2024-12-10 14:30:32.199214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.625 [2024-12-10 14:30:32.199242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.625 [2024-12-10 14:30:32.199250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.625 [2024-12-10 14:30:32.204754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.625 [2024-12-10 14:30:32.204775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.625 [2024-12-10 14:30:32.204783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.625 [2024-12-10 14:30:32.210130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.625 [2024-12-10 14:30:32.210152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.625 [2024-12-10 14:30:32.210160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.625 [2024-12-10 14:30:32.215538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.625 [2024-12-10 14:30:32.215561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.625 [2024-12-10 14:30:32.215569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.625 [2024-12-10 14:30:32.220834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.625 [2024-12-10 14:30:32.220856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.625 [2024-12-10 14:30:32.220864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.625 [2024-12-10 14:30:32.226161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.625 [2024-12-10 14:30:32.226183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.625 [2024-12-10 14:30:32.226191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.625 [2024-12-10 14:30:32.231449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.625 [2024-12-10 14:30:32.231471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.625 [2024-12-10 14:30:32.231479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.625 [2024-12-10 14:30:32.236865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.625 [2024-12-10 14:30:32.236889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.625 [2024-12-10 14:30:32.236897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.625 [2024-12-10 14:30:32.242270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.625 [2024-12-10 14:30:32.242292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.625 [2024-12-10 14:30:32.242300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.625 [2024-12-10 14:30:32.247621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.625 [2024-12-10 14:30:32.247643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.625 [2024-12-10 14:30:32.247651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.625 [2024-12-10 14:30:32.252967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.625 [2024-12-10 14:30:32.252989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.625 [2024-12-10 14:30:32.252997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.625 [2024-12-10 14:30:32.258179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.625 [2024-12-10 14:30:32.258201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.625 [2024-12-10 14:30:32.258209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.625 [2024-12-10 14:30:32.263395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.625 [2024-12-10 14:30:32.263417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.625 [2024-12-10 14:30:32.263426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.625 [2024-12-10 14:30:32.268792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.625 [2024-12-10 14:30:32.268814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.625 [2024-12-10 14:30:32.268825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.625 [2024-12-10 14:30:32.274515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.625 [2024-12-10 14:30:32.274537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.625 [2024-12-10 14:30:32.274546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:31.625 [2024-12-10 14:30:32.279802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.625 [2024-12-10 14:30:32.279824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.625 [2024-12-10 14:30:32.279832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.625 [2024-12-10 14:30:32.285056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.625 [2024-12-10 14:30:32.285077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.625 [2024-12-10 14:30:32.285085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0
dnr:0 00:28:31.625 [2024-12-10 14:30:32.290326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:31.625 [2024-12-10 14:30:32.290348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.625 [2024-12-10 14:30:32.290356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:31.625 [2024-12-10 14:30:32.295629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:31.625 [2024-12-10 14:30:32.295650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.625 [2024-12-10 14:30:32.295659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:31.625 [2024-12-10 14:30:32.300917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:31.625 [2024-12-10 14:30:32.300939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.625 [2024-12-10 14:30:32.300946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:31.625 [2024-12-10 14:30:32.306238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:31.625 [2024-12-10 14:30:32.306259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.625 [2024-12-10 14:30:32.306268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:31.625 [2024-12-10 14:30:32.311517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:31.625 [2024-12-10 14:30:32.311538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.625 [2024-12-10 14:30:32.311546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:31.625 [2024-12-10 14:30:32.316711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:31.625 [2024-12-10 14:30:32.316736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.625 [2024-12-10 14:30:32.316744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:31.625 [2024-12-10 14:30:32.321891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420) 00:28:31.625 [2024-12-10 14:30:32.321913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.625 [2024-12-10 14:30:32.321922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:31.625 [2024-12-10 14:30:32.327018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.625 [2024-12-10 14:30:32.327041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.625 [2024-12-10 14:30:32.327049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:31.625 [2024-12-10 14:30:32.332195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x178d420)
00:28:31.625 [2024-12-10 14:30:32.332224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.625 [2024-12-10 14:30:32.332232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:31.625 5680.00 IOPS, 710.00 MiB/s
00:28:31.625 Latency(us)
00:28:31.625 [2024-12-10T13:30:32.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:31.625 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:31.626 nvme0n1 : 2.00 5679.20 709.90 0.00 0.00 2814.55 631.95 8862.96
00:28:31.626 [2024-12-10T13:30:32.366Z] ===================================================================================================================
00:28:31.626 [2024-12-10T13:30:32.366Z] Total : 5679.20 709.90 0.00 0.00 2814.55 631.95 8862.96
00:28:31.626 {
00:28:31.626   "results": [
00:28:31.626     {
00:28:31.626       "job": "nvme0n1",
00:28:31.626       "core_mask": "0x2",
00:28:31.626       "workload": "randread",
00:28:31.626       "status": "finished",
00:28:31.626       "queue_depth": 16,
00:28:31.626       "io_size": 131072,
00:28:31.626       "runtime": 2.003274,
00:28:31.626       "iops": 5679.203144452531,
00:28:31.626       "mibps": 709.9003930565664,
00:28:31.626       "io_failed": 0,
00:28:31.626       "io_timeout": 0,
00:28:31.626       "avg_latency_us": 2814.5517152818766,
00:28:31.626       "min_latency_us": 631.9542857142857,
00:28:31.626       "max_latency_us": 8862.96380952381
00:28:31.626     }
00:28:31.626   ],
00:28:31.626   "core_count": 1
00:28:31.626 }
00:28:31.626 14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:31.626 | .driver_specific
00:28:31.626 | .nvme_error
00:28:31.626 | .status_code
00:28:31.626 | .command_transient_transport_error'
00:28:31.626 14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:31.885 14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 367 > 0 ))
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1803120
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1803120 ']'
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1803120
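The transient-error check traced above (get_transient_errcount, which wraps bdev_get_iostat and the jq filter) can be reproduced by hand against a running bperf socket. A minimal sketch, assuming a bdevperf instance is still serving RPCs on /var/tmp/bperf.sock and exposes bdev nvme0n1:

    #!/usr/bin/env bash
    # Sketch of the get_transient_errcount check seen in the trace above,
    # using the same rpc.py call and jq filter that appear in the log.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')

    # The test passes only if at least one transient transport error was
    # counted (367 in the run above, matching the injected digest errors).
    (( errcount > 0 )) || exit 1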
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1803120
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1803120'
killing process with pid 1803120
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1803120
Received shutdown signal, test time was about 2.000000 seconds
00:28:31.885
00:28:31.885 Latency(us)
00:28:31.885 [2024-12-10T13:30:32.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:31.885 [2024-12-10T13:30:32.625Z] ===================================================================================================================
00:28:31.885 [2024-12-10T13:30:32.625Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:31.885 14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1803120
00:28:32.144 14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1803775
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1803775 /var/tmp/bperf.sock
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1803775 ']'
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
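At this point waitforlisten blocks until the freshly launched bdevperf answers on the RPC socket. An illustrative stand-in for that wait, not the exact autotest_common.sh implementation (which also honors max_retries=100 as traced above); rpc_get_methods is a standard SPDK RPC used here as a liveness probe:

    # Poll the RPC socket until the new bdevperf (pid 1803775 above) accepts
    # an RPC, giving up after max_retries attempts or if the process dies.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    wait_for_rpc_socket() {
        local pid=$1 rpc_addr=$2 max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            # Fail fast if the process exited before ever listening.
            kill -0 "$pid" 2>/dev/null || return 1
            if "$SPDK_DIR/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }

    wait_for_rpc_socket 1803775 /var/tmp/bperf.sock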
00:28:32.144 14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
14:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:32.412 [2024-12-10 14:30:32.810961] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization...
00:28:32.412 [2024-12-10 14:30:32.811011] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1803775 ]
00:28:32.412 [2024-12-10 14:30:32.887276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:32.412 [2024-12-10 14:30:32.928522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:32.412 14:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
14:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
14:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
14:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:32.661 14:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
14:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
14:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
14:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
14:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
14:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:32.920 nvme0n1
00:28:32.920 14:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
14:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
14:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
14:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
14:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
14:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
Running I/O for 2 seconds...
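Condensed from the trace above, the write-path digest-error setup amounts to the following RPC sequence against the bperf socket; a sketch using exactly the commands shown in the log, with paths as in this workspace:

    # CRC32C error injection is disabled while the controller attaches, then
    # re-enabled (-t corrupt -i 256, as traced above) so crc32c results are
    # corrupted during the measured run and surface as data digest errors.
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0    # prints: nvme0n1
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256

    # Kick off the workload configured on the bdevperf command line
    # (-w randwrite -o 4096 -q 128 -t 2).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests

The WRITE-side tcp.c:2241 "Data digest error" records that follow are the direct result of this injection, mirroring the READ-side nvme_tcp.c:1365 errors from the previous run.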
00:28:32.920 [2024-12-10 14:30:33.645316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee5220 00:28:32.920 [2024-12-10 14:30:33.646412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.920 [2024-12-10 14:30:33.646441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:32.920 [2024-12-10 14:30:33.654560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016edfdc0 00:28:32.920 [2024-12-10 14:30:33.655711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.920 [2024-12-10 14:30:33.655735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:33.179 [2024-12-10 14:30:33.663972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee0ea0 00:28:33.179 [2024-12-10 14:30:33.665059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.179 [2024-12-10 14:30:33.665079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.672422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee8d30 00:28:33.180 [2024-12-10 14:30:33.673493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.673512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.680913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016efb048 00:28:33.180 [2024-12-10 14:30:33.681632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.681651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.690150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016eecc78 00:28:33.180 [2024-12-10 14:30:33.690650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.690670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.700507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef35f0 00:28:33.180 [2024-12-10 14:30:33.701800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.701818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:003b p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.708940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016eedd58 00:28:33.180 [2024-12-10 14:30:33.709893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.709912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.717974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef7970 00:28:33.180 [2024-12-10 14:30:33.718853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.718873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.726872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef7970 00:28:33.180 [2024-12-10 14:30:33.727821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.727839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.735891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef7970 00:28:33.180 [2024-12-10 14:30:33.736856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.736874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.744883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef7970 00:28:33.180 [2024-12-10 14:30:33.745815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.745833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.754174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016eeea00 00:28:33.180 [2024-12-10 14:30:33.755219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.755239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.762717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee0ea0 00:28:33.180 [2024-12-10 14:30:33.763754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.763773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.772236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef4298 00:28:33.180 [2024-12-10 14:30:33.773395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.773415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.781412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016eddc00 00:28:33.180 [2024-12-10 14:30:33.782577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.782596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.790245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016eeff18 00:28:33.180 [2024-12-10 14:30:33.790977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.790997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.798754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016efd208 00:28:33.180 [2024-12-10 14:30:33.799981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.800000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.807068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016eea248 00:28:33.180 [2024-12-10 14:30:33.807774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.807793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.816102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee5ec8 00:28:33.180 [2024-12-10 14:30:33.816802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.816823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.825158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016eed0b0 00:28:33.180 [2024-12-10 14:30:33.825921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.825940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.833588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee7c50 00:28:33.180 [2024-12-10 14:30:33.834289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.834311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.843059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee73e0 00:28:33.180 [2024-12-10 14:30:33.843794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.843813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.852667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee73e0 00:28:33.180 [2024-12-10 14:30:33.853411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.853430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.862239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016eee5c8 00:28:33.180 [2024-12-10 14:30:33.863322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.863341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.871661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016eeb328 00:28:33.180 [2024-12-10 14:30:33.872745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.872765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.879715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016efc560 00:28:33.180 [2024-12-10 14:30:33.880430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.880448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.888696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016efc560 00:28:33.180 [2024-12-10 14:30:33.889502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.889522] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.899726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef6458 00:28:33.180 [2024-12-10 14:30:33.900836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.900856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.907447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef6cc8 00:28:33.180 [2024-12-10 14:30:33.908074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.180 [2024-12-10 14:30:33.908093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:33.180 [2024-12-10 14:30:33.915808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee6300 00:28:33.181 [2024-12-10 14:30:33.916518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.181 [2024-12-10 14:30:33.916538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:33.440 [2024-12-10 14:30:33.927070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016efd208 00:28:33.440 [2024-12-10 14:30:33.928271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.440 [2024-12-10 14:30:33.928291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:33.440 [2024-12-10 14:30:33.935577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef5be8 00:28:33.440 [2024-12-10 14:30:33.936455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.440 [2024-12-10 14:30:33.936474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:33.440 [2024-12-10 14:30:33.944614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016efb8b8 00:28:33.440 [2024-12-10 14:30:33.945464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.440 [2024-12-10 14:30:33.945483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:33.440 [2024-12-10 14:30:33.953746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016eef270 00:28:33.440 [2024-12-10 14:30:33.954507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.440 [2024-12-10 
14:30:33.954526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:33.440 [2024-12-10 14:30:33.963046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016efd208 00:28:33.440 [2024-12-10 14:30:33.964004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.440 [2024-12-10 14:30:33.964023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:33.440 [2024-12-10 14:30:33.972045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016efd640 00:28:33.440 [2024-12-10 14:30:33.973140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.440 [2024-12-10 14:30:33.973160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:33.440 [2024-12-10 14:30:33.982600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef81e0 00:28:33.440 [2024-12-10 14:30:33.983938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.440 [2024-12-10 14:30:33.983956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:33.440 [2024-12-10 14:30:33.990207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016efb8b8 00:28:33.440 [2024-12-10 14:30:33.991058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.440 [2024-12-10 14:30:33.991078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:33.440 [2024-12-10 14:30:34.000338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016eebb98 00:28:33.440 [2024-12-10 14:30:34.001571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.440 [2024-12-10 14:30:34.001590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:33.440 [2024-12-10 14:30:34.008172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee7c50 00:28:33.440 [2024-12-10 14:30:34.008925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.440 [2024-12-10 14:30:34.008944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:33.440 [2024-12-10 14:30:34.017137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016efb048 00:28:33.440 [2024-12-10 14:30:34.017885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:33.440 [2024-12-10 14:30:34.017905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:33.440 [2024-12-10 14:30:34.026406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee8088 00:28:33.440 [2024-12-10 14:30:34.027149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.440 [2024-12-10 14:30:34.027169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:33.440 [2024-12-10 14:30:34.035214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016eed0b0 00:28:33.440 [2024-12-10 14:30:34.036057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.440 [2024-12-10 14:30:34.036077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:33.440 [2024-12-10 14:30:34.043815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016efc560 00:28:33.440 [2024-12-10 14:30:34.044446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.440 [2024-12-10 14:30:34.044466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:33.440 [2024-12-10 14:30:34.052914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef7100 00:28:33.440 [2024-12-10 14:30:34.053533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.440 [2024-12-10 14:30:34.053553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:33.440 [2024-12-10 14:30:34.061152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee8088 00:28:33.440 [2024-12-10 14:30:34.061836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.440 [2024-12-10 14:30:34.061855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:33.440 [2024-12-10 14:30:34.070592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016efa7d8 00:28:33.440 [2024-12-10 14:30:34.071320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.440 [2024-12-10 14:30:34.071343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:33.440 [2024-12-10 14:30:34.081352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016efa7d8 00:28:33.440 [2024-12-10 14:30:34.082557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7910 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:33.440 [2024-12-10 14:30:34.082577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:33.440 [2024-12-10 14:30:34.089193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee88f8 00:28:33.440 [2024-12-10 14:30:34.089914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.440 [2024-12-10 14:30:34.089934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:33.440 [2024-12-10 14:30:34.098152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee8d30 00:28:33.440 [2024-12-10 14:30:34.098870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.440 [2024-12-10 14:30:34.098889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:33.440 [2024-12-10 14:30:34.107521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef5be8 00:28:33.440 [2024-12-10 14:30:34.108471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.440 [2024-12-10 14:30:34.108491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:33.441 [2024-12-10 14:30:34.118448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef5be8 00:28:33.441 [2024-12-10 14:30:34.119982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.441 [2024-12-10 14:30:34.120002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:33.441 [2024-12-10 14:30:34.124995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef8a50 00:28:33.441 [2024-12-10 14:30:34.125791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.441 [2024-12-10 14:30:34.125811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:33.441 [2024-12-10 14:30:34.136153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016eeaef0 00:28:33.441 [2024-12-10 14:30:34.137629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.441 [2024-12-10 14:30:34.137648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:33.441 [2024-12-10 14:30:34.145783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016efdeb0 00:28:33.441 [2024-12-10 14:30:34.147128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 
lba:10338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.441 [2024-12-10 14:30:34.147148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:33.441 [2024-12-10 14:30:34.155232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef57b0 00:28:33.441 [2024-12-10 14:30:34.156684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.441 [2024-12-10 14:30:34.156706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:33.441 [2024-12-10 14:30:34.161809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016eea248 00:28:33.441 [2024-12-10 14:30:34.162588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.441 [2024-12-10 14:30:34.162608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:33.441 [2024-12-10 14:30:34.172632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016eea248 00:28:33.441 [2024-12-10 14:30:34.173851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.441 [2024-12-10 14:30:34.173871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:33.700 [2024-12-10 14:30:34.180711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016efb480 00:28:33.700 [2024-12-10 14:30:34.181458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.700 [2024-12-10 14:30:34.181478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:33.700 [2024-12-10 14:30:34.190134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef96f8 00:28:33.700 [2024-12-10 14:30:34.191101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.700 [2024-12-10 14:30:34.191120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:33.700 [2024-12-10 14:30:34.200915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef96f8 00:28:33.700 [2024-12-10 14:30:34.202443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.700 [2024-12-10 14:30:34.202462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:33.700 [2024-12-10 14:30:34.207436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee5ec8 00:28:33.700 [2024-12-10 14:30:34.208188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:103 nsid:1 lba:24645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.700 [2024-12-10 14:30:34.208206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:33.700 [2024-12-10 14:30:34.218007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef57b0 00:28:33.700 [2024-12-10 14:30:34.218994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.700 [2024-12-10 14:30:34.219013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:33.700 [2024-12-10 14:30:34.226373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016efc560 00:28:33.700 [2024-12-10 14:30:34.227424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.700 [2024-12-10 14:30:34.227443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:33.700 [2024-12-10 14:30:34.234799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef92c0 00:28:33.700 [2024-12-10 14:30:34.235506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.700 [2024-12-10 14:30:34.235526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:33.700 [2024-12-10 14:30:34.244872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016eee190 00:28:33.700 [2024-12-10 14:30:34.246019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.700 [2024-12-10 14:30:34.246038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:33.700 [2024-12-10 14:30:34.253298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef2d80 00:28:33.700 [2024-12-10 14:30:34.254111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.700 [2024-12-10 14:30:34.254130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:33.700 [2024-12-10 14:30:34.262228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016eff3c8 00:28:33.700 [2024-12-10 14:30:34.263058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.700 [2024-12-10 14:30:34.263076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:33.700 [2024-12-10 14:30:34.271494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef57b0 00:28:33.700 [2024-12-10 14:30:34.272437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.700 [2024-12-10 14:30:34.272456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:33.700 [2024-12-10 14:30:34.280974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef81e0 00:28:33.700 [2024-12-10 14:30:34.282009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.700 [2024-12-10 14:30:34.282028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:33.700 [2024-12-10 14:30:34.290439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef4f40 00:28:33.700 [2024-12-10 14:30:34.291528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.700 [2024-12-10 14:30:34.291547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:33.700 [2024-12-10 14:30:34.297821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee73e0 00:28:33.700 [2024-12-10 14:30:34.298521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.700 [2024-12-10 14:30:34.298540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:33.700 [2024-12-10 14:30:34.307113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee38d0 00:28:33.700 [2024-12-10 14:30:34.307676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.700 [2024-12-10 14:30:34.307700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:33.700 [2024-12-10 14:30:34.316345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee4140 00:28:33.700 [2024-12-10 14:30:34.317145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.700 [2024-12-10 14:30:34.317163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:33.700 [2024-12-10 14:30:34.325399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016efd640 00:28:33.700 [2024-12-10 14:30:34.326243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.700 [2024-12-10 14:30:34.326262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:33.701 [2024-12-10 14:30:34.334465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef7100 00:28:33.701 [2024-12-10 
14:30:34.335279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:33.701 [2024-12-10 14:30:34.335298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:28:33.701 [2024-12-10 14:30:34.343505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef6020
00:28:33.701 [2024-12-10 14:30:34.344314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:33.701 [2024-12-10 14:30:34.344333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
[... the same three-record pattern (a Data digest error on tqpair 0x15d86c0 at a varying pdu offset, then the affected WRITE command and its retryable dnr:0 TRANSIENT TRANSPORT ERROR (00/22) completion) repeats with varying cid and lba values through 14:30:34.63; duplicate records trimmed ...]
00:28:33.961 [2024-12-10 14:30:34.633955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee23b8
27926.00 IOPS, 109.09 MiB/s [2024-12-10T13:30:34.701Z]
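Note: each data_crc32_calc_done error above means the CRC32C data digest (DDGST) appended to an NVMe/TCP data PDU did not match the payload the host received, so the driver completes the affected WRITE with a retryable (dnr:0) TRANSIENT TRANSPORT ERROR (00/22) status instead of accepting silently corrupted data; this run appears to be deliberately injecting digest failures to exercise that path. As a self-contained illustration only (a minimal sketch, not SPDK's implementation; SPDK ships its own optimized CRC32C helpers), the digest being verified is a standard CRC32C (Castagnoli):

    /* crc32c_demo.c: bitwise CRC32C (Castagnoli), the digest algorithm NVMe/TCP
     * uses for the DDGST field whose mismatch is reported in this log. Slow but
     * dependency-free; production code uses table-driven or SSE4.2 variants. */
    #include <inttypes.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;          /* standard CRC32C initial value */

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int k = 0; k < 8; k++)      /* reflected polynomial 0x82F63B78 */
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
        return crc ^ 0xFFFFFFFFu;            /* standard final XOR */
    }

    int main(void)
    {
        const uint8_t msg[] = "123456789";

        /* Prints 0xE3069283, the published CRC32C check value for "123456789". */
        printf("0x%08" PRIX32 "\n", crc32c(msg, sizeof(msg) - 1));
        return 0;
    }

Compiled with cc crc32c_demo.c and run, this prints 0xE3069283, the standard CRC32C check value; a receiver recomputes this digest over the PDU data and compares it against the transmitted DDGST, which is exactly the comparison failing in the records above.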
00:28:34.740 [2024-12-10 14:30:35.471988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.740 [2024-12-10 14:30:35.472008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:34.999 [2024-12-10 14:30:35.480329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef6458 00:28:34.999 [2024-12-10 14:30:35.481318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.999 [2024-12-10 14:30:35.481337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:34.999 [2024-12-10 14:30:35.488700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef81e0 00:28:34.999 [2024-12-10 14:30:35.489693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.999 [2024-12-10 14:30:35.489712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:34.999 [2024-12-10 14:30:35.498167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee3d08 00:28:34.999 [2024-12-10 14:30:35.499282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.999 [2024-12-10 14:30:35.499301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:34.999 [2024-12-10 14:30:35.507634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee0a68 00:28:34.999 [2024-12-10 14:30:35.508884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.999 [2024-12-10 14:30:35.508904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:34.999 [2024-12-10 14:30:35.516136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee4578 00:28:34.999 [2024-12-10 14:30:35.517073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.999 [2024-12-10 14:30:35.517092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:34.999 [2024-12-10 14:30:35.525095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef7100 00:28:34.999 [2024-12-10 14:30:35.525932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.999 [2024-12-10 14:30:35.525951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:34.999 [2024-12-10 14:30:35.534544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) 
with pdu=0x200016ef0ff8 00:28:34.999 [2024-12-10 14:30:35.535699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.999 [2024-12-10 14:30:35.535718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:34.999 [2024-12-10 14:30:35.543116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016eeee38 00:28:34.999 [2024-12-10 14:30:35.543943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.999 [2024-12-10 14:30:35.543962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:34.999 [2024-12-10 14:30:35.552124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016eec408 00:28:34.999 [2024-12-10 14:30:35.553050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.999 [2024-12-10 14:30:35.553069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:34.999 [2024-12-10 14:30:35.561594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee5220 00:28:34.999 [2024-12-10 14:30:35.562656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.999 [2024-12-10 14:30:35.562675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:34.999 [2024-12-10 14:30:35.570771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016eee190 00:28:34.999 [2024-12-10 14:30:35.571391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.999 [2024-12-10 14:30:35.571414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:34.999 [2024-12-10 14:30:35.580243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee6300 00:28:34.999 [2024-12-10 14:30:35.580972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.999 [2024-12-10 14:30:35.580992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:34.999 [2024-12-10 14:30:35.588501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee73e0 00:28:34.999 [2024-12-10 14:30:35.589308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.999 [2024-12-10 14:30:35.589328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:34.999 [2024-12-10 14:30:35.597335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x15d86c0) with pdu=0x200016eec408
00:28:34.999 [2024-12-10 14:30:35.597956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:34.999 [2024-12-10 14:30:35.597974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:28:34.999 [2024-12-10 14:30:35.606325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ef1430
00:28:34.999 [2024-12-10 14:30:35.606933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:34.999 [2024-12-10 14:30:35.606952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:28:34.999 [2024-12-10 14:30:35.615261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee9e10
00:28:34.999 [2024-12-10 14:30:35.615952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:34.999 [2024-12-10 14:30:35.615971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:28:34.999 [2024-12-10 14:30:35.623937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee6738
00:28:34.999 [2024-12-10 14:30:35.624614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:34.999 [2024-12-10 14:30:35.624634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:28:34.999 [2024-12-10 14:30:35.634898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d86c0) with pdu=0x200016ee3d08
00:28:34.999 [2024-12-10 14:30:35.636155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:34.999 [2024-12-10 14:30:35.636176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:28:34.999 28021.50 IOPS, 109.46 MiB/s
00:28:34.999 Latency(us)
00:28:34.999 [2024-12-10T13:30:35.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:34.999 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:34.999 nvme0n1 : 2.01 28038.44 109.53 0.00 0.00 4559.20 2231.34 12420.63
00:28:34.999 [2024-12-10T13:30:35.739Z] ===================================================================================================================
00:28:34.999 [2024-12-10T13:30:35.739Z] Total : 28038.44 109.53 0.00 0.00 4559.20 2231.34 12420.63
00:28:34.999 {
00:28:34.999   "results": [
00:28:34.999     {
00:28:34.999       "job": "nvme0n1",
00:28:34.999       "core_mask": "0x2",
00:28:34.999       "workload": "randwrite",
00:28:34.999       "status": "finished",
00:28:34.999       "queue_depth": 128,
00:28:34.999       "io_size": 4096,
00:28:34.999       "runtime": 2.00621,
00:28:34.999       "iops": 28038.44064180719,
00:28:34.999       "mibps": 109.52515875705933,
00:28:34.999       "io_failed": 0,
00:28:34.999       "io_timeout": 0,
00:28:34.999       "avg_latency_us": 4559.19646848183,
00:28:34.999       "min_latency_us": 2231.344761904762,
00:28:34.999       "max_latency_us": 12420.63238095238
00:28:34.999     }
00:28:34.999   ],
00:28:34.999   "core_count": 1
00:28:34.999 }
00:28:34.999 14:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:34.999 14:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:35.000 14:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:35.000 | .driver_specific
00:28:35.000 | .nvme_error
00:28:35.000 | .status_code
00:28:35.000 | .command_transient_transport_error'
00:28:35.000 14:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:35.258 14:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 220 > 0 ))
00:28:35.258 14:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1803775
00:28:35.258 14:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1803775 ']'
00:28:35.258 14:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1803775
00:28:35.258 14:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:35.258 14:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:35.258 14:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1803775
00:28:35.258 14:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:35.258 14:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:35.258 14:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1803775'
00:28:35.258 killing process with pid 1803775
00:28:35.258 14:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1803775
00:28:35.258 Received shutdown signal, test time was about 2.000000 seconds
00:28:35.258 
00:28:35.258 Latency(us)
00:28:35.258 [2024-12-10T13:30:35.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:35.258 [2024-12-10T13:30:35.998Z] ===================================================================================================================
00:28:35.258 [2024-12-10T13:30:35.998Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:35.258 14:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1803775
00:28:35.517 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:35.517 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:35.517 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:35.517 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:35.517 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:35.517 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1804261
00:28:35.517 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1804261 /var/tmp/bperf.sock
00:28:35.517 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:28:35.517 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1804261 ']'
00:28:35.517 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:35.517 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:35.517 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:35.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:35.517 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:35.517 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:35.517 [2024-12-10 14:30:36.120264] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization...
00:28:35.517 [2024-12-10 14:30:36.120312] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1804261 ]
00:28:35.517 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:35.517 Zero copy mechanism will not be used.
00:28:35.517 [2024-12-10 14:30:36.202459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:35.517 [2024-12-10 14:30:36.238971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:35.775 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:35.775 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:28:35.775 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:35.775 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:36.034 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:36.034 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:36.034 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:36.034 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:36.034 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:36.034 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:36.293 nvme0n1
00:28:36.293 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:36.293 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:36.293 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:36.293 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:36.293 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:36.293 14:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:36.293 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:36.293 Zero copy mechanism will not be used.
00:28:36.293 Running I/O for 2 seconds...
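
For readers reconstructing the flow from the xtrace above: digest.sh starts bdevperf on /var/tmp/bperf.sock, turns on per-status NVMe error counters with unlimited bdev retries, attaches the controller with TCP data digest enabled (--ddgst), injects CRC32C corruption through the accel error RPC so digest verification fails, runs the workload, and finally requires at least one COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion. The shell sketch below is reconstructed from the trace, not verbatim host/digest.sh: SPDK_DIR and the rpc_tgt/rpc_bperf helpers are stand-ins for the suite's rpc_cmd and bperf_rpc wrappers, and the assumption that rpc_cmd addresses the target application's default RPC socket is inferred from the trace rather than shown in it.

rpc_tgt()   { "$SPDK_DIR/scripts/rpc.py" "$@"; }                        # stand-in for rpc_cmd (default socket)
rpc_bperf() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; } # stand-in for bperf_rpc (bdevperf)

# digest.sh@27-28 in the trace: pull the transient transport error counter
# accumulated for the bdev since --nvme-error-stat was enabled.
get_transient_errcount() {
    rpc_bperf bdev_get_iostat -b "$1" |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}

rpc_bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # keep error stats, retry forever
rpc_tgt accel_error_inject_error -o crc32c -t disable                    # clear any previous injection
rpc_bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                       # data digest on the TCP connection
rpc_tgt accel_error_inject_error -o crc32c -t corrupt -i 32              # corrupt CRC32C results (-t corrupt -i 32 as traced)
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
(( $(get_transient_errcount nvme0n1) > 0 ))                              # first run above counted 220

Because --bdev-retry-count -1 retries every digest failure, the workload completes with io_failed 0 while the error counter still records each 00/22 completion, which is exactly what the (( count > 0 )) check asserts.
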
00:28:36.293 [2024-12-10 14:30:37.002138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.293 [2024-12-10 14:30:37.002208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.293 [2024-12-10 14:30:37.002245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.293 [2024-12-10 14:30:37.006614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.293 [2024-12-10 14:30:37.006681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.293 [2024-12-10 14:30:37.006702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.293 [2024-12-10 14:30:37.011174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.293 [2024-12-10 14:30:37.011245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.293 [2024-12-10 14:30:37.011264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.293 [2024-12-10 14:30:37.015447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.293 [2024-12-10 14:30:37.015513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.293 [2024-12-10 14:30:37.015532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.293 [2024-12-10 14:30:37.019689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.293 [2024-12-10 14:30:37.019756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.293 [2024-12-10 14:30:37.019775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.293 [2024-12-10 14:30:37.023871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.293 [2024-12-10 14:30:37.023937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.293 [2024-12-10 14:30:37.023956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.293 [2024-12-10 14:30:37.028036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.293 [2024-12-10 14:30:37.028104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.293 [2024-12-10 14:30:37.028124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.032310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.552 [2024-12-10 14:30:37.032370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.552 [2024-12-10 14:30:37.032389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.036524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.552 [2024-12-10 14:30:37.036629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.552 [2024-12-10 14:30:37.036651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.040655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.552 [2024-12-10 14:30:37.040711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.552 [2024-12-10 14:30:37.040730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.044749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.552 [2024-12-10 14:30:37.044802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.552 [2024-12-10 14:30:37.044820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.048852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.552 [2024-12-10 14:30:37.048915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.552 [2024-12-10 14:30:37.048933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.052971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.552 [2024-12-10 14:30:37.053035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.552 [2024-12-10 14:30:37.053055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.057048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.552 [2024-12-10 14:30:37.057103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.552 [2024-12-10 14:30:37.057122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.061120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.552 [2024-12-10 14:30:37.061174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.552 [2024-12-10 14:30:37.061193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.065188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.552 [2024-12-10 14:30:37.065252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.552 [2024-12-10 14:30:37.065271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.069272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.552 [2024-12-10 14:30:37.069336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.552 [2024-12-10 14:30:37.069354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.073357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.552 [2024-12-10 14:30:37.073413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.552 [2024-12-10 14:30:37.073431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.077462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.552 [2024-12-10 14:30:37.077528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.552 [2024-12-10 14:30:37.077547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.081674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.552 [2024-12-10 14:30:37.081781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.552 [2024-12-10 14:30:37.081800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.086549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.552 [2024-12-10 14:30:37.086626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.552 [2024-12-10 14:30:37.086645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.092500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.552 [2024-12-10 14:30:37.092686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.552 [2024-12-10 14:30:37.092704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.097807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.552 [2024-12-10 14:30:37.097897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.552 [2024-12-10 14:30:37.097916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.103659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.552 [2024-12-10 14:30:37.103826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.552 [2024-12-10 14:30:37.103845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.109803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.552 [2024-12-10 14:30:37.109964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.552 [2024-12-10 14:30:37.109982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.116250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.552 [2024-12-10 14:30:37.116427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.552 [2024-12-10 14:30:37.116446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.122626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.552 [2024-12-10 14:30:37.122794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.552 [2024-12-10 14:30:37.122813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.129100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.552 [2024-12-10 14:30:37.129277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.552 [2024-12-10 14:30:37.129298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.136577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.552 [2024-12-10 14:30:37.136711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.552 [2024-12-10 14:30:37.136732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.552 [2024-12-10 14:30:37.144154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.144332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.144353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.151379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.151502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.151523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.158441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.158588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.158609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.165568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.165879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.165900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.172085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.172430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.172451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.179213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.179574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 
14:30:37.179598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.185941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.186298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.186317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.192227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.192595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.192615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.197376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.197661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.197681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.201545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.201823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.201843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.205699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.205965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.205985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.209759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.210035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.210055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.213852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.214128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:36.553 [2024-12-10 14:30:37.214148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.217952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.218240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.218260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.222001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.222285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.222305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.226032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.226322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.226342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.230083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.230363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.230383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.234411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.234698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.234718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.238558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.238825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.238845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.242557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.242817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.242837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.246545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.246814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.246834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.250503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.250762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.250782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.254497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.254762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.254782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.258443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.258679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.258699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.262330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.262555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.262576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.266181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.266426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.266446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.270076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.270310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.270328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.273915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.553 [2024-12-10 14:30:37.274150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.553 [2024-12-10 14:30:37.274170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.553 [2024-12-10 14:30:37.277730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.554 [2024-12-10 14:30:37.277961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.554 [2024-12-10 14:30:37.277981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.554 [2024-12-10 14:30:37.281411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.554 [2024-12-10 14:30:37.281617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.554 [2024-12-10 14:30:37.281637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.554 [2024-12-10 14:30:37.284984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.554 [2024-12-10 14:30:37.285184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.554 [2024-12-10 14:30:37.285202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.554 [2024-12-10 14:30:37.288659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.554 [2024-12-10 14:30:37.288879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.554 [2024-12-10 14:30:37.288905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.813 [2024-12-10 14:30:37.292289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.813 [2024-12-10 14:30:37.292511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.813 [2024-12-10 14:30:37.292530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.813 [2024-12-10 14:30:37.295959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:36.813 [2024-12-10 14:30:37.296169] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:36.813 [2024-12-10 14:30:37.296189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:36.813 [2024-12-10 14:30:37.299553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8
00:28:36.813 [2024-12-10 14:30:37.299762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:36.813 [2024-12-10 14:30:37.299781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[repetitive log run condensed: the same three-record pattern repeats from 14:30:37.303154 through 14:30:37.918088 — tcp.c:2241:data_crc32_calc_done reports a data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8, nvme_qpair.c prints the offending WRITE on sqid:1, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); only the timestamps, LBAs, and rotating cid/sqhd values (0002/0022/0042/0062) differ between records]
00:28:37.338 [2024-12-10 14:30:37.923142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8
00:28:37.338 [2024-12-10 14:30:37.923413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.338 [2024-12-10 14:30:37.923434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:37.338 [2024-12-10 14:30:37.929068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00)
with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:37.929266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:37.929284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:37.935496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:37.935776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:37.935797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:37.941182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:37.941361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:37.941380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:37.945799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:37.945960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:37.945978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:37.949611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:37.949782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:37.949802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:37.953449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:37.953617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:37.953637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:37.957274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:37.957444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:37.957463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:37.961094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:37.961273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:37.961291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:37.964932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:37.965088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:37.965106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:37.968854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:37.969024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:37.969042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:37.972680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:37.972843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:37.972864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:37.976484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:37.976661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:37.976681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:37.980204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:37.980370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:37.980389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:37.984044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:37.984228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:37.984246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:37.987692] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:37.987862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:37.987881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:37.991423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:37.991586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:37.991608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:37.995991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:37.996146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:37.996166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.338 6736.00 IOPS, 842.00 MiB/s [2024-12-10T13:30:38.078Z] [2024-12-10 14:30:38.001246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:38.001455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:38.001476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:38.005079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:38.005257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:38.005275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:38.008954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:38.009128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:38.009149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:38.012902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:38.013076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:38.013097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:38.016767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:38.016948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:38.016970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:38.020573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:38.020756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:38.020777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:38.024476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:38.024652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:38.024671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:38.028823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:38.029014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:38.029033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:38.033274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.338 [2024-12-10 14:30:38.033454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.338 [2024-12-10 14:30:38.033475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.338 [2024-12-10 14:30:38.037379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.339 [2024-12-10 14:30:38.037570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.339 [2024-12-10 14:30:38.037590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.339 [2024-12-10 14:30:38.041193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.339 [2024-12-10 14:30:38.041386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.339 [2024-12-10 14:30:38.041405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.339 [2024-12-10 14:30:38.044956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.339 [2024-12-10 14:30:38.045143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.339 [2024-12-10 14:30:38.045161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.339 [2024-12-10 14:30:38.048783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.339 [2024-12-10 14:30:38.048975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.339 [2024-12-10 14:30:38.048993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.339 [2024-12-10 14:30:38.052656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.339 [2024-12-10 14:30:38.052851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.339 [2024-12-10 14:30:38.052871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.339 [2024-12-10 14:30:38.056774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.339 [2024-12-10 14:30:38.056961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.339 [2024-12-10 14:30:38.056981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.339 [2024-12-10 14:30:38.061568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.339 [2024-12-10 14:30:38.061752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.339 [2024-12-10 14:30:38.061771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.339 [2024-12-10 14:30:38.065732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.339 [2024-12-10 14:30:38.065908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.339 [2024-12-10 14:30:38.065928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.339 [2024-12-10 14:30:38.069438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.339 [2024-12-10 14:30:38.069624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.339 [2024-12-10 14:30:38.069645] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.339 [2024-12-10 14:30:38.073198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.339 [2024-12-10 14:30:38.073398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.339 [2024-12-10 14:30:38.073417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.076976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.077160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.077179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.081285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.081537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.081559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.086887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.087152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.087173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.091902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.092092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.092111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.097586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.097899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.097920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.103486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.103690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.103714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.108962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.109216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.109245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.115558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.115824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.115846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.121925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.122127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.122146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.128381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.128569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.128589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.134608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.134810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.134831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.141442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.141727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.141748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.148352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.148542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 
14:30:38.148564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.155564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.155674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.155693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.162983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.163096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.163116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.169284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.169438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.169458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.176494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.176652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.176673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.183589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.183759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.183779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.191388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.191565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.191584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.198232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.198335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:37.599 [2024-12-10 14:30:38.198353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.204500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.204691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.204710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.211204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.211339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.211358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.218472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.218627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.218645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.224573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.224716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.224736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.231423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.231519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.231537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.238585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.238762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.238780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.244169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.244268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.244287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.248625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.248701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.248720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.252587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.252640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.252659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.256425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.256497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.256516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.599 [2024-12-10 14:30:38.260133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.599 [2024-12-10 14:30:38.260194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.599 [2024-12-10 14:30:38.260213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.600 [2024-12-10 14:30:38.263910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.600 [2024-12-10 14:30:38.263964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.600 [2024-12-10 14:30:38.263987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.600 [2024-12-10 14:30:38.267730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.600 [2024-12-10 14:30:38.267793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.600 [2024-12-10 14:30:38.267812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.600 [2024-12-10 14:30:38.271533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.600 [2024-12-10 14:30:38.271597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.600 [2024-12-10 14:30:38.271616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.600 [2024-12-10 14:30:38.275338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.600 [2024-12-10 14:30:38.275395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.600 [2024-12-10 14:30:38.275414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.600 [2024-12-10 14:30:38.279099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.600 [2024-12-10 14:30:38.279167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.600 [2024-12-10 14:30:38.279186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.600 [2024-12-10 14:30:38.282876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.600 [2024-12-10 14:30:38.282934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.600 [2024-12-10 14:30:38.282952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.600 [2024-12-10 14:30:38.286592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.600 [2024-12-10 14:30:38.286659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.600 [2024-12-10 14:30:38.286679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.600 [2024-12-10 14:30:38.290281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.600 [2024-12-10 14:30:38.290335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.600 [2024-12-10 14:30:38.290354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.600 [2024-12-10 14:30:38.293946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.600 [2024-12-10 14:30:38.294005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.600 [2024-12-10 14:30:38.294024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.600 [2024-12-10 14:30:38.297627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.600 [2024-12-10 14:30:38.297691] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.600 [2024-12-10 14:30:38.297710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.600 [2024-12-10 14:30:38.301323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.600 [2024-12-10 14:30:38.301377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.600 [2024-12-10 14:30:38.301395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.600 [2024-12-10 14:30:38.304967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.600 [2024-12-10 14:30:38.305039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.600 [2024-12-10 14:30:38.305058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.600 [2024-12-10 14:30:38.308665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.600 [2024-12-10 14:30:38.308726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.600 [2024-12-10 14:30:38.308744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.600 [2024-12-10 14:30:38.312316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.600 [2024-12-10 14:30:38.312386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.600 [2024-12-10 14:30:38.312405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.600 [2024-12-10 14:30:38.316003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.600 [2024-12-10 14:30:38.316063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.600 [2024-12-10 14:30:38.316082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.600 [2024-12-10 14:30:38.319697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.600 [2024-12-10 14:30:38.319766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.600 [2024-12-10 14:30:38.319786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.600 [2024-12-10 14:30:38.323375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.600 [2024-12-10 14:30:38.323436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.600 [2024-12-10 14:30:38.323455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.600 [2024-12-10 14:30:38.327081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.600 [2024-12-10 14:30:38.327147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.600 [2024-12-10 14:30:38.327166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.600 [2024-12-10 14:30:38.330767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.600 [2024-12-10 14:30:38.330834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.600 [2024-12-10 14:30:38.330853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.600 [2024-12-10 14:30:38.334501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.600 [2024-12-10 14:30:38.334561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.600 [2024-12-10 14:30:38.334580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.860 [2024-12-10 14:30:38.338230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.860 [2024-12-10 14:30:38.338293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.860 [2024-12-10 14:30:38.338312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.860 [2024-12-10 14:30:38.341991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.860 [2024-12-10 14:30:38.342054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.860 [2024-12-10 14:30:38.342073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.860 [2024-12-10 14:30:38.345694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.860 [2024-12-10 14:30:38.345755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.860 [2024-12-10 14:30:38.345773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.860 [2024-12-10 14:30:38.349363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.860 [2024-12-10 
14:30:38.349416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.860 [2024-12-10 14:30:38.349435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.860 [2024-12-10 14:30:38.353045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.860 [2024-12-10 14:30:38.353102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.860 [2024-12-10 14:30:38.353120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.860 [2024-12-10 14:30:38.356752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.860 [2024-12-10 14:30:38.356811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.860 [2024-12-10 14:30:38.356830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.860 [2024-12-10 14:30:38.360415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.860 [2024-12-10 14:30:38.360477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.860 [2024-12-10 14:30:38.360499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.860 [2024-12-10 14:30:38.364095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.860 [2024-12-10 14:30:38.364155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.860 [2024-12-10 14:30:38.364173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.860 [2024-12-10 14:30:38.367771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.860 [2024-12-10 14:30:38.367830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.860 [2024-12-10 14:30:38.367848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.860 [2024-12-10 14:30:38.371420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.860 [2024-12-10 14:30:38.371474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.860 [2024-12-10 14:30:38.371493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.860 [2024-12-10 14:30:38.375114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with 
pdu=0x200016eff3c8 00:28:37.860 [2024-12-10 14:30:38.375167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.860 [2024-12-10 14:30:38.375185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.860 [2024-12-10 14:30:38.378790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.860 [2024-12-10 14:30:38.378841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.860 [2024-12-10 14:30:38.378860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.860 [2024-12-10 14:30:38.382490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.860 [2024-12-10 14:30:38.382543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.860 [2024-12-10 14:30:38.382561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.860 [2024-12-10 14:30:38.386237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.860 [2024-12-10 14:30:38.386301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.860 [2024-12-10 14:30:38.386320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.860 [2024-12-10 14:30:38.390681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.860 [2024-12-10 14:30:38.390777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.860 [2024-12-10 14:30:38.390795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.860 [2024-12-10 14:30:38.395441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.860 [2024-12-10 14:30:38.395515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.860 [2024-12-10 14:30:38.395534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.860 [2024-12-10 14:30:38.400341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.860 [2024-12-10 14:30:38.400396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.860 [2024-12-10 14:30:38.400415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.860 [2024-12-10 14:30:38.405386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.860 [2024-12-10 14:30:38.405458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.860 [2024-12-10 14:30:38.405478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.860 [2024-12-10 14:30:38.410098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.860 [2024-12-10 14:30:38.410222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.860 [2024-12-10 14:30:38.410242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.860 [2024-12-10 14:30:38.415344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.860 [2024-12-10 14:30:38.415418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.415437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.422022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.422229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.422248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.428569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.428636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.428655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.435182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.435361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.435380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.441916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.442084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.442103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.448983] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.449104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.449124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.456036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.456156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.456175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.463021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.463129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.463148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.470611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.470792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.470810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.476950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.477157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.477177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.482347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.482493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.482513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.487543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.487655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.487673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.491674] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.491729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.491749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.495791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.495856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.495878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.500076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.500167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.500186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.504208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.504318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.504337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.508259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.508354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.508373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.512169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.512255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.512274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.515904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.515989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.516007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.861 
[2024-12-10 14:30:38.520223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.520396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.520416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.525668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.525873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.525892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.530804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.530952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.530971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.536084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.536274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.536293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.541411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.541582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.541599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.546585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.546792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.546812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.551763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.551943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.551962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.556980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.557151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.557169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.562323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.562522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.861 [2024-12-10 14:30:38.562550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.861 [2024-12-10 14:30:38.568396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.861 [2024-12-10 14:30:38.568488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.862 [2024-12-10 14:30:38.568505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.862 [2024-12-10 14:30:38.574790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.862 [2024-12-10 14:30:38.574969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.862 [2024-12-10 14:30:38.574988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.862 [2024-12-10 14:30:38.581398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.862 [2024-12-10 14:30:38.581488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.862 [2024-12-10 14:30:38.581506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.862 [2024-12-10 14:30:38.587113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.862 [2024-12-10 14:30:38.587183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.862 [2024-12-10 14:30:38.587202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.862 [2024-12-10 14:30:38.591658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.862 [2024-12-10 14:30:38.591728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.862 [2024-12-10 14:30:38.591746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.862 [2024-12-10 14:30:38.596124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:37.862 [2024-12-10 14:30:38.596207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.862 [2024-12-10 14:30:38.596233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:38.121 [2024-12-10 14:30:38.601508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.121 [2024-12-10 14:30:38.601613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.121 [2024-12-10 14:30:38.601631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:38.121 [2024-12-10 14:30:38.606248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.121 [2024-12-10 14:30:38.606317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.121 [2024-12-10 14:30:38.606335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:38.121 [2024-12-10 14:30:38.610656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.121 [2024-12-10 14:30:38.610725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.121 [2024-12-10 14:30:38.610743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:38.121 [2024-12-10 14:30:38.614830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.121 [2024-12-10 14:30:38.614913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.121 [2024-12-10 14:30:38.614932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:38.121 [2024-12-10 14:30:38.619832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.121 [2024-12-10 14:30:38.619916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.619935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.624288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.624357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.624380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.628955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.629033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.629051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.633294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.633366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.633385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.637716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.637785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.637803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.642014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.642085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.642103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.646279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.646351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.646369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.650820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.650893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.650912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.655021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.655088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.655107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.659698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.659764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.659782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.664366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.664438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.664456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.668617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.668694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.668712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.673090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.673163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.673182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.677001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.677069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.677088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.681399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.681466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.681484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.685643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.685710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 
14:30:38.685728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.690024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.690091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.690109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.694626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.694695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.694713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.698938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.699005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.699023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.703063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.703135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.703154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.706880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.706949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.706968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.710570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.710635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.710653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.714478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.714545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:38.122 [2024-12-10 14:30:38.714563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.718274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.718341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.718359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.722148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.722221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.722240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.725952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.726020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.726038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.729747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.729831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.729849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.734283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.122 [2024-12-10 14:30:38.734401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.122 [2024-12-10 14:30:38.734423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:38.122 [2024-12-10 14:30:38.739292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.123 [2024-12-10 14:30:38.739455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.123 [2024-12-10 14:30:38.739473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:38.123 [2024-12-10 14:30:38.745401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.123 [2024-12-10 14:30:38.745559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.123 [2024-12-10 14:30:38.745577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:38.123 [2024-12-10 14:30:38.751481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.123 [2024-12-10 14:30:38.751578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.123 [2024-12-10 14:30:38.751596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:38.123 [2024-12-10 14:30:38.758005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.123 [2024-12-10 14:30:38.758103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.123 [2024-12-10 14:30:38.758122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:38.123 [2024-12-10 14:30:38.764982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.123 [2024-12-10 14:30:38.765141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.123 [2024-12-10 14:30:38.765161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:38.123 [2024-12-10 14:30:38.771483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.123 [2024-12-10 14:30:38.771613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.123 [2024-12-10 14:30:38.771631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:38.123 [2024-12-10 14:30:38.778705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.123 [2024-12-10 14:30:38.778881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.123 [2024-12-10 14:30:38.778899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:38.123 [2024-12-10 14:30:38.785959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.123 [2024-12-10 14:30:38.786065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.123 [2024-12-10 14:30:38.786083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:38.123 [2024-12-10 14:30:38.792493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.123 [2024-12-10 14:30:38.792662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.123 [2024-12-10 14:30:38.792680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:38.123 [2024-12-10 14:30:38.799525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.123 [2024-12-10 14:30:38.799629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.123 [2024-12-10 14:30:38.799647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:38.123 [2024-12-10 14:30:38.806411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.123 [2024-12-10 14:30:38.806626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.123 [2024-12-10 14:30:38.806654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:38.123 [2024-12-10 14:30:38.813149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.123 [2024-12-10 14:30:38.813288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.123 [2024-12-10 14:30:38.813307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:38.123 [2024-12-10 14:30:38.820032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.123 [2024-12-10 14:30:38.820195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.123 [2024-12-10 14:30:38.820213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:38.123 [2024-12-10 14:30:38.826721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.123 [2024-12-10 14:30:38.826797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.123 [2024-12-10 14:30:38.826815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:38.123 [2024-12-10 14:30:38.833288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.123 [2024-12-10 14:30:38.833469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.123 [2024-12-10 14:30:38.833486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:38.123 [2024-12-10 14:30:38.839703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.123 [2024-12-10 14:30:38.839879] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.123 [2024-12-10 14:30:38.839897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:38.123 [2024-12-10 14:30:38.846323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.123 [2024-12-10 14:30:38.846463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.123 [2024-12-10 14:30:38.846481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:38.123 [2024-12-10 14:30:38.852665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.123 [2024-12-10 14:30:38.852816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.123 [2024-12-10 14:30:38.852834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:38.123 [2024-12-10 14:30:38.859328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.123 [2024-12-10 14:30:38.859432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.123 [2024-12-10 14:30:38.859451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:38.382 [2024-12-10 14:30:38.866258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.382 [2024-12-10 14:30:38.866430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.382 [2024-12-10 14:30:38.866450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:38.382 [2024-12-10 14:30:38.872975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.382 [2024-12-10 14:30:38.873162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.382 [2024-12-10 14:30:38.873180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:38.382 [2024-12-10 14:30:38.880325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.382 [2024-12-10 14:30:38.880466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.382 [2024-12-10 14:30:38.880484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:38.382 [2024-12-10 14:30:38.887518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.382 [2024-12-10 14:30:38.887633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.382 [2024-12-10 14:30:38.887652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:38.382 [2024-12-10 14:30:38.894531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.382 [2024-12-10 14:30:38.894651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.382 [2024-12-10 14:30:38.894668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:38.382 [2024-12-10 14:30:38.901233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.382 [2024-12-10 14:30:38.901395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.382 [2024-12-10 14:30:38.901414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:38.382 [2024-12-10 14:30:38.907973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.382 [2024-12-10 14:30:38.908153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.382 [2024-12-10 14:30:38.908176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:38.382 [2024-12-10 14:30:38.914508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.382 [2024-12-10 14:30:38.914621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.382 [2024-12-10 14:30:38.914640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:38.382 [2024-12-10 14:30:38.921582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.382 [2024-12-10 14:30:38.921698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.382 [2024-12-10 14:30:38.921716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:38.382 [2024-12-10 14:30:38.928505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.382 [2024-12-10 14:30:38.928640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.382 [2024-12-10 14:30:38.928659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:38.383 [2024-12-10 14:30:38.934943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.383 [2024-12-10 
14:30:38.935115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.383 [2024-12-10 14:30:38.935133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:38.383 [2024-12-10 14:30:38.941779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.383 [2024-12-10 14:30:38.941951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.383 [2024-12-10 14:30:38.941969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:38.383 [2024-12-10 14:30:38.948827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.383 [2024-12-10 14:30:38.948969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.383 [2024-12-10 14:30:38.948988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:38.383 [2024-12-10 14:30:38.955574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.383 [2024-12-10 14:30:38.955734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.383 [2024-12-10 14:30:38.955752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:38.383 [2024-12-10 14:30:38.962260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.383 [2024-12-10 14:30:38.962454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.383 [2024-12-10 14:30:38.962473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:38.383 [2024-12-10 14:30:38.969125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.383 [2024-12-10 14:30:38.969278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.383 [2024-12-10 14:30:38.969296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:38.383 [2024-12-10 14:30:38.976163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.383 [2024-12-10 14:30:38.976360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.383 [2024-12-10 14:30:38.976379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:38.383 [2024-12-10 14:30:38.983137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with 
pdu=0x200016eff3c8 00:28:38.383 [2024-12-10 14:30:38.983314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.383 [2024-12-10 14:30:38.983332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:38.383 [2024-12-10 14:30:38.989836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.383 [2024-12-10 14:30:38.990044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.383 [2024-12-10 14:30:38.990064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:38.383 [2024-12-10 14:30:38.996580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.383 [2024-12-10 14:30:38.996678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.383 [2024-12-10 14:30:38.996697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:38.383 6378.50 IOPS, 797.31 MiB/s [2024-12-10T13:30:39.123Z] [2024-12-10 14:30:39.003823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d8a00) with pdu=0x200016eff3c8 00:28:38.383 [2024-12-10 14:30:39.003980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.383 [2024-12-10 14:30:39.003999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:38.383 00:28:38.383 Latency(us) 00:28:38.383 [2024-12-10T13:30:39.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.383 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:38.383 nvme0n1 : 2.00 6372.71 796.59 0.00 0.00 2505.30 1458.96 9175.04 00:28:38.383 [2024-12-10T13:30:39.123Z] =================================================================================================================== 00:28:38.383 [2024-12-10T13:30:39.123Z] Total : 6372.71 796.59 0.00 0.00 2505.30 1458.96 9175.04 00:28:38.383 { 00:28:38.383 "results": [ 00:28:38.383 { 00:28:38.383 "job": "nvme0n1", 00:28:38.383 "core_mask": "0x2", 00:28:38.383 "workload": "randwrite", 00:28:38.383 "status": "finished", 00:28:38.383 "queue_depth": 16, 00:28:38.383 "io_size": 131072, 00:28:38.383 "runtime": 2.004329, 00:28:38.383 "iops": 6372.706277262864, 00:28:38.383 "mibps": 796.588284657858, 00:28:38.383 "io_failed": 0, 00:28:38.383 "io_timeout": 0, 00:28:38.383 "avg_latency_us": 2505.2952738850177, 00:28:38.383 "min_latency_us": 1458.9561904761904, 00:28:38.383 "max_latency_us": 9175.04 00:28:38.383 } 00:28:38.383 ], 00:28:38.383 "core_count": 1 00:28:38.383 } 00:28:38.383 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:38.383 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:38.383 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r 
'.bdevs[0] 00:28:38.383 | .driver_specific 00:28:38.383 | .nvme_error 00:28:38.383 | .status_code 00:28:38.383 | .command_transient_transport_error' 00:28:38.383 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:38.642 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 413 > 0 )) 00:28:38.642 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1804261 00:28:38.642 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1804261 ']' 00:28:38.642 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1804261 00:28:38.642 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:38.642 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:38.642 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1804261 00:28:38.642 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:38.642 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:38.642 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1804261' 00:28:38.642 killing process with pid 1804261 00:28:38.642 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1804261 00:28:38.642 Received shutdown signal, test time was about 2.000000 seconds 00:28:38.642 00:28:38.642 Latency(us) 00:28:38.642 [2024-12-10T13:30:39.382Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.642 [2024-12-10T13:30:39.382Z] =================================================================================================================== 00:28:38.642 [2024-12-10T13:30:39.382Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:38.642 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1804261 00:28:38.900 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1802409 00:28:38.900 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1802409 ']' 00:28:38.900 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1802409 00:28:38.900 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:38.900 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:38.900 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1802409 00:28:38.900 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:38.900 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:38.900 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 1802409' 00:28:38.900 killing process with pid 1802409 00:28:38.900 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1802409 00:28:38.900 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1802409 00:28:39.159 00:28:39.159 real 0m14.756s 00:28:39.159 user 0m27.788s 00:28:39.159 sys 0m4.588s 00:28:39.159 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:39.159 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:39.159 ************************************ 00:28:39.159 END TEST nvmf_digest_error 00:28:39.159 ************************************ 00:28:39.159 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:39.159 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:39.159 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:39.159 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:39.159 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:39.159 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:39.159 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:39.159 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:39.159 rmmod nvme_tcp 00:28:39.159 rmmod nvme_fabrics 00:28:39.159 rmmod nvme_keyring 00:28:39.159 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:39.159 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:39.159 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:39.159 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1802409 ']' 00:28:39.159 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1802409 00:28:39.159 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1802409 ']' 00:28:39.159 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1802409 00:28:39.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1802409) - No such process 00:28:39.160 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1802409 is not found' 00:28:39.160 Process with pid 1802409 is not found 00:28:39.160 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:39.160 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:39.160 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:39.160 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:39.160 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:28:39.160 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:39.160 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:28:39.160 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:39.160 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:28:39.160 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.160 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.160 14:30:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.694 14:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:41.694 00:28:41.694 real 0m38.684s 00:28:41.694 user 0m57.455s 00:28:41.694 sys 0m14.417s 00:28:41.694 14:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:41.694 14:30:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.694 ************************************ 00:28:41.694 END TEST nvmf_digest 00:28:41.694 ************************************ 00:28:41.694 14:30:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:41.694 14:30:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:41.694 14:30:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:41.694 14:30:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:41.694 14:30:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:41.694 14:30:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:41.694 14:30:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.694 ************************************ 00:28:41.694 START TEST nvmf_bdevperf 00:28:41.694 ************************************ 00:28:41.694 14:30:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:41.694 * Looking for test storage... 
00:28:41.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:41.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.694 --rc genhtml_branch_coverage=1 00:28:41.694 --rc genhtml_function_coverage=1 00:28:41.694 --rc genhtml_legend=1 00:28:41.694 --rc geninfo_all_blocks=1 00:28:41.694 --rc geninfo_unexecuted_blocks=1 00:28:41.694 00:28:41.694 ' 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:41.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.694 --rc genhtml_branch_coverage=1 00:28:41.694 --rc genhtml_function_coverage=1 00:28:41.694 --rc genhtml_legend=1 00:28:41.694 --rc geninfo_all_blocks=1 00:28:41.694 --rc geninfo_unexecuted_blocks=1 00:28:41.694 00:28:41.694 ' 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:41.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.694 --rc genhtml_branch_coverage=1 00:28:41.694 --rc genhtml_function_coverage=1 00:28:41.694 --rc genhtml_legend=1 00:28:41.694 --rc geninfo_all_blocks=1 00:28:41.694 --rc geninfo_unexecuted_blocks=1 00:28:41.694 00:28:41.694 ' 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:41.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.694 --rc genhtml_branch_coverage=1 00:28:41.694 --rc genhtml_function_coverage=1 00:28:41.694 --rc genhtml_legend=1 00:28:41.694 --rc geninfo_all_blocks=1 00:28:41.694 --rc geninfo_unexecuted_blocks=1 00:28:41.694 00:28:41.694 ' 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.694 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:41.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:41.695 14:30:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:48.394 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:48.394 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.394 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
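For reference, the pci_net_devs expansion running in the loop above and below resolves each matched PCI function to the net device the kernel bound under it, by globbing sysfs. A minimal standalone sketch of the same lookup (the address and the cvl_0_0 name are the values observed in this run, not general):

#!/usr/bin/env bash
pci=0000:af:00.0                                   # first e810 port found in the scan above
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # expands to e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep only the name
echo "Found net devices under $pci: ${pci_net_devs[*]}"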
00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:48.395 Found net devices under 0000:af:00.0: cvl_0_0 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:48.395 Found net devices under 0000:af:00.1: cvl_0_1 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:48.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:48.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:28:48.395 00:28:48.395 --- 10.0.0.2 ping statistics --- 00:28:48.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.395 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:48.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:48.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:28:48.395 00:28:48.395 --- 10.0.0.1 ping statistics --- 00:28:48.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.395 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1808766 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1808766 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1808766 ']' 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:48.395 14:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:48.395 [2024-12-10 14:30:48.943300] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
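Summarizing the nvmf_tcp_init plumbing traced above: one e810 port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, the NVMe/TCP port is opened in iptables, and both directions are ping-verified. Collected into a standalone sketch (run as root; interface names and addresses are the ones from this run):

#!/usr/bin/env bash
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator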
00:28:48.395 [2024-12-10 14:30:48.943345] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.395 [2024-12-10 14:30:49.024645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:48.395 [2024-12-10 14:30:49.066753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.395 [2024-12-10 14:30:49.066788] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.395 [2024-12-10 14:30:49.066795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:48.395 [2024-12-10 14:30:49.066801] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:48.395 [2024-12-10 14:30:49.066807] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:48.395 [2024-12-10 14:30:49.068185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.395 [2024-12-10 14:30:49.068290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.395 [2024-12-10 14:30:49.068291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:49.331 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:49.331 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:49.331 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:49.331 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:49.331 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.331 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:49.331 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:49.331 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.331 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.331 [2024-12-10 14:30:49.831175] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.331 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.331 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:49.331 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.331 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.331 Malloc0 00:28:49.331 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.331 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:49.331 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.331 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.332 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:49.332 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:49.332 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.332 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.332 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.332 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:49.332 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.332 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.332 [2024-12-10 14:30:49.893132] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.332 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.332 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:49.332 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:49.332 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:49.332 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:49.332 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:49.332 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:49.332 { 00:28:49.332 "params": { 00:28:49.332 "name": "Nvme$subsystem", 00:28:49.332 "trtype": "$TEST_TRANSPORT", 00:28:49.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:49.332 "adrfam": "ipv4", 00:28:49.332 "trsvcid": "$NVMF_PORT", 00:28:49.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:49.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:49.332 "hdgst": ${hdgst:-false}, 00:28:49.332 "ddgst": ${ddgst:-false} 00:28:49.332 }, 00:28:49.332 "method": "bdev_nvme_attach_controller" 00:28:49.332 } 00:28:49.332 EOF 00:28:49.332 )") 00:28:49.332 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:49.332 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:49.332 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:49.332 14:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:49.332 "params": { 00:28:49.332 "name": "Nvme1", 00:28:49.332 "trtype": "tcp", 00:28:49.332 "traddr": "10.0.0.2", 00:28:49.332 "adrfam": "ipv4", 00:28:49.332 "trsvcid": "4420", 00:28:49.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:49.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:49.332 "hdgst": false, 00:28:49.332 "ddgst": false 00:28:49.332 }, 00:28:49.332 "method": "bdev_nvme_attach_controller" 00:28:49.332 }' 00:28:49.332 [2024-12-10 14:30:49.945892] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:28:49.332 [2024-12-10 14:30:49.945935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1808972 ] 00:28:49.332 [2024-12-10 14:30:50.024713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.591 [2024-12-10 14:30:50.070713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.849 Running I/O for 1 seconds... 00:28:50.784 11362.00 IOPS, 44.38 MiB/s 00:28:50.785 Latency(us) 00:28:50.785 [2024-12-10T13:30:51.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.785 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:50.785 Verification LBA range: start 0x0 length 0x4000 00:28:50.785 Nvme1n1 : 1.01 11433.10 44.66 0.00 0.00 11141.01 2075.31 13356.86 00:28:50.785 [2024-12-10T13:30:51.525Z] =================================================================================================================== 00:28:50.785 [2024-12-10T13:30:51.525Z] Total : 11433.10 44.66 0.00 0.00 11141.01 2075.31 13356.86 00:28:50.785 14:30:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1809248 00:28:50.785 14:30:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:50.785 14:30:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:50.785 14:30:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:51.044 14:30:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:51.044 14:30:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:51.044 14:30:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.044 14:30:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.044 { 00:28:51.044 "params": { 00:28:51.044 "name": "Nvme$subsystem", 00:28:51.044 "trtype": "$TEST_TRANSPORT", 00:28:51.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.044 "adrfam": "ipv4", 00:28:51.044 "trsvcid": "$NVMF_PORT", 00:28:51.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.044 "hdgst": ${hdgst:-false}, 00:28:51.044 "ddgst": ${ddgst:-false} 00:28:51.044 }, 00:28:51.044 "method": "bdev_nvme_attach_controller" 00:28:51.044 } 00:28:51.044 EOF 00:28:51.044 )") 00:28:51.044 14:30:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:51.044 14:30:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:28:51.044 14:30:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:51.044 14:30:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:51.044 "params": { 00:28:51.044 "name": "Nvme1", 00:28:51.044 "trtype": "tcp", 00:28:51.044 "traddr": "10.0.0.2", 00:28:51.044 "adrfam": "ipv4", 00:28:51.044 "trsvcid": "4420", 00:28:51.044 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:51.044 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:51.044 "hdgst": false, 00:28:51.044 "ddgst": false 00:28:51.044 }, 00:28:51.044 "method": "bdev_nvme_attach_controller" 00:28:51.044 }' 00:28:51.044 [2024-12-10 14:30:51.567110] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:28:51.044 [2024-12-10 14:30:51.567156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1809248 ] 00:28:51.044 [2024-12-10 14:30:51.647826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.044 [2024-12-10 14:30:51.685226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.611 Running I/O for 15 seconds... 00:28:53.483 11493.00 IOPS, 44.89 MiB/s [2024-12-10T13:30:54.794Z] 11517.50 IOPS, 44.99 MiB/s [2024-12-10T13:30:54.794Z] 14:30:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1808766 00:28:54.054 14:30:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:54.054 [2024-12-10 14:30:54.543454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.054 [2024-12-10 14:30:54.543491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.054 [2024-12-10 14:30:54.543510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.054 [2024-12-10 14:30:54.543519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.054 [2024-12-10 14:30:54.543529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.054 [2024-12-10 14:30:54.543537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.054 [2024-12-10 14:30:54.543547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.054 [2024-12-10 14:30:54.543554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.054 [2024-12-10 14:30:54.543563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.054 [2024-12-10 14:30:54.543571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.054 [2024-12-10 14:30:54.543580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.054 [2024-12-10 
14:30:54.543590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.054 [... identical nvme_qpair WRITE / "ABORTED - SQ DELETION (00/08)" record pairs repeat for the remaining queued commands, lba 96584 through 97056 and onward (qid:1, len:8 each), while the target killed by host/bdevperf.sh@33 is down; repeated records condensed ...]
sqid:1 cid:91 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.055 [2024-12-10 14:30:54.544679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.055 [2024-12-10 14:30:54.544687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.055 [2024-12-10 14:30:54.544694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.055 [2024-12-10 14:30:54.544702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.544708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.544716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.544722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.544731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.544738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.544746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.544753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.544760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.544766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.544774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.544780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.544788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.544794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.544802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.544809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.544817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97144 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.544823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.544830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.544836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.544844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.544851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.544859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.544866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.544873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.056 [2024-12-10 14:30:54.544879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.544887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.544895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.544902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.544914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.544922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.544929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.544936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.544942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.544950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.544958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.544966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 
14:30:54.544972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.544979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.544986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.544993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.545000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.545007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.545013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.545021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.545028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.545036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.545042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.545050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.545056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.545063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.545071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.545079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.545086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.545093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.545101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.545108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.545115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.545122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.545131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.545139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.545145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.545153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.545160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.545167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.545173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.545181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.545188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.545196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.545202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.545210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.545216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.545227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.545234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.545241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.545264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.545272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.545279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.545287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.545293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.056 [2024-12-10 14:30:54.545303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.056 [2024-12-10 14:30:54.545309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.057 [2024-12-10 14:30:54.545318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.057 [2024-12-10 14:30:54.545325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.057 [2024-12-10 14:30:54.545333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.057 [2024-12-10 14:30:54.545339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.057 [2024-12-10 14:30:54.545347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.057 [2024-12-10 14:30:54.545354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.057 [2024-12-10 14:30:54.545361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.057 [2024-12-10 14:30:54.545368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.057 [2024-12-10 14:30:54.545377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.057 [2024-12-10 14:30:54.545384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.057 [2024-12-10 14:30:54.545393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.057 [2024-12-10 14:30:54.545399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.057 [2024-12-10 14:30:54.545407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.057 [2024-12-10 14:30:54.545414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.057 [2024-12-10 14:30:54.545421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.057 [2024-12-10 14:30:54.545428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:54.057 [2024-12-10 14:30:54.545436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.057 [2024-12-10 14:30:54.545443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.057 [2024-12-10 14:30:54.545451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.057 [2024-12-10 14:30:54.545457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.057 [2024-12-10 14:30:54.545465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.057 [2024-12-10 14:30:54.545472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.057 [2024-12-10 14:30:54.545480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.057 [2024-12-10 14:30:54.545489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.057 [2024-12-10 14:30:54.545497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.057 [2024-12-10 14:30:54.545503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.057 [2024-12-10 14:30:54.545511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.057 [2024-12-10 14:30:54.545518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.057 [2024-12-10 14:30:54.545525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.057 [2024-12-10 14:30:54.545532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.057 [2024-12-10 14:30:54.545540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.057 [2024-12-10 14:30:54.545546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.057 [2024-12-10 14:30:54.545555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.057 [2024-12-10 14:30:54.545561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.057 [2024-12-10 14:30:54.545569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.057 [2024-12-10 14:30:54.545575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.057 [2024-12-10 
14:30:54.545583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12927f0 is same with the state(6) to be set 00:28:54.057 [2024-12-10 14:30:54.545591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:54.057 [2024-12-10 14:30:54.545597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:54.057 [2024-12-10 14:30:54.545603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96528 len:8 PRP1 0x0 PRP2 0x0 00:28:54.057 [2024-12-10 14:30:54.545611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.057 [2024-12-10 14:30:54.548389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:54.057 [2024-12-10 14:30:54.548443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.057 [2024-12-10 14:30:54.548987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.057 [2024-12-10 14:30:54.549002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:54.057 [2024-12-10 14:30:54.549010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:54.057 [2024-12-10 14:30:54.549181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.057 [2024-12-10 14:30:54.549358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:54.057 [2024-12-10 14:30:54.549367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:54.057 [2024-12-10 14:30:54.549376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:54.057 [2024-12-10 14:30:54.549386] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:54.057 [2024-12-10 14:30:54.561387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:54.057 [2024-12-10 14:30:54.561741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.057 [2024-12-10 14:30:54.561789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:54.057 [2024-12-10 14:30:54.561814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:54.057 [2024-12-10 14:30:54.562347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.057 [2024-12-10 14:30:54.562520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:54.057 [2024-12-10 14:30:54.562532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:54.057 [2024-12-10 14:30:54.562542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:28:54.057 [2024-12-10 14:30:54.562551] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:54.057 [2024-12-10 14:30:54.574207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:54.057 [2024-12-10 14:30:54.574559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.057 [2024-12-10 14:30:54.574578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:54.057 [2024-12-10 14:30:54.574586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:54.057 [2024-12-10 14:30:54.574755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.057 [2024-12-10 14:30:54.574927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:54.057 [2024-12-10 14:30:54.574937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:54.057 [2024-12-10 14:30:54.574944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:54.057 [2024-12-10 14:30:54.574950] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:54.057 [2024-12-10 14:30:54.587027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:54.057 [2024-12-10 14:30:54.587450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.057 [2024-12-10 14:30:54.587470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:54.057 [2024-12-10 14:30:54.587489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:54.057 [2024-12-10 14:30:54.587651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.057 [2024-12-10 14:30:54.587812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:54.057 [2024-12-10 14:30:54.587821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:54.057 [2024-12-10 14:30:54.587828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:54.057 [2024-12-10 14:30:54.587834] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
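[editor's note] Every completion in the dump above carries the same status, which spdk_nvme_print_completion renders as "(00/08)": status code type 0x0 (generic command status) and status code 0x08, defined in the NVMe base specification as Command Aborted due to SQ Deletion - the expected status for I/O still queued on a submission queue when that queue is torn down during a reset. The p/m/dnr fields are the phase, more, and do-not-retry bits of the same status halfword. Below is a minimal decoding sketch in plain C (not SPDK code; the bit positions follow the spec's Status Field layout in completion dword 3):

/* Decode the upper halfword of NVMe completion dword 3 into the fields
 * that the log prints as "(SCT/SC) ... p:_ m:_ dnr:_". */
#include <stdint.h>
#include <stdio.h>

struct nvme_status {
    unsigned p   : 1;  /* phase tag                    */
    unsigned sc  : 8;  /* status code                  */
    unsigned sct : 3;  /* status code type             */
    unsigned crd : 2;  /* command retry delay          */
    unsigned m   : 1;  /* more status info in log page */
    unsigned dnr : 1;  /* do not retry                 */
};

static struct nvme_status decode_status(uint16_t raw)
{
    struct nvme_status s = {
        .p   =  raw        & 0x1,
        .sc  = (raw >> 1)  & 0xff,
        .sct = (raw >> 9)  & 0x7,
        .crd = (raw >> 12) & 0x3,
        .m   = (raw >> 14) & 0x1,
        .dnr = (raw >> 15) & 0x1,
    };
    return s;
}

int main(void)
{
    /* SCT 0x0 / SC 0x08 = Command Aborted due to SQ Deletion; the raw
     * halfword is the status code shifted past the phase bit. */
    uint16_t raw = 0x08 << 1;
    struct nvme_status s = decode_status(raw);
    /* Prints "(00/08) p:0 m:0 dnr:0", matching the log lines above. */
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}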
00:28:54.057 [2024-12-10 14:30:54.600032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:54.057 [2024-12-10 14:30:54.600370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.057 [2024-12-10 14:30:54.600392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:54.057 [2024-12-10 14:30:54.600400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:54.057 [2024-12-10 14:30:54.600569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.057 [2024-12-10 14:30:54.600740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:54.057 [2024-12-10 14:30:54.600750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:54.057 [2024-12-10 14:30:54.600757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:54.057 [2024-12-10 14:30:54.600763] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:54.057 [2024-12-10 14:30:54.613005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:54.058 [2024-12-10 14:30:54.613366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.058 [2024-12-10 14:30:54.613385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:54.058 [2024-12-10 14:30:54.613393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:54.058 [2024-12-10 14:30:54.613553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.058 [2024-12-10 14:30:54.613715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:54.058 [2024-12-10 14:30:54.613725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:54.058 [2024-12-10 14:30:54.613731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:54.058 [2024-12-10 14:30:54.613738] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:54.058 [2024-12-10 14:30:54.625941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:54.058 [2024-12-10 14:30:54.626282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.058 [2024-12-10 14:30:54.626300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:54.058 [2024-12-10 14:30:54.626308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:54.058 [2024-12-10 14:30:54.626477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.058 [2024-12-10 14:30:54.626648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:54.058 [2024-12-10 14:30:54.626657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:54.058 [2024-12-10 14:30:54.626664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:54.058 [2024-12-10 14:30:54.626672] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:54.058 [2024-12-10 14:30:54.638846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:54.058 [2024-12-10 14:30:54.639275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.058 [2024-12-10 14:30:54.639292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:54.058 [2024-12-10 14:30:54.639300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:54.058 [2024-12-10 14:30:54.639479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.058 [2024-12-10 14:30:54.639639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:54.058 [2024-12-10 14:30:54.639649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:54.058 [2024-12-10 14:30:54.639654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:54.058 [2024-12-10 14:30:54.639661] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:54.058 [2024-12-10 14:30:54.651671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:54.058 [2024-12-10 14:30:54.652027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.058 [2024-12-10 14:30:54.652044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:54.058 [2024-12-10 14:30:54.652053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:54.058 [2024-12-10 14:30:54.652230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.058 [2024-12-10 14:30:54.652402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:54.058 [2024-12-10 14:30:54.652412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:54.058 [2024-12-10 14:30:54.652420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:54.058 [2024-12-10 14:30:54.652426] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:54.058 [2024-12-10 14:30:54.664511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:54.058 [2024-12-10 14:30:54.664924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.058 [2024-12-10 14:30:54.664962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:54.058 [2024-12-10 14:30:54.664988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:54.058 [2024-12-10 14:30:54.665596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.058 [2024-12-10 14:30:54.665768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:54.058 [2024-12-10 14:30:54.665778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:54.058 [2024-12-10 14:30:54.665785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:54.058 [2024-12-10 14:30:54.665791] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:54.058 [2024-12-10 14:30:54.677301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:54.058 [2024-12-10 14:30:54.677661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.058 [2024-12-10 14:30:54.677679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:54.058 [2024-12-10 14:30:54.677687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:54.058 [2024-12-10 14:30:54.677856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.058 [2024-12-10 14:30:54.678025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:54.058 [2024-12-10 14:30:54.678038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:54.058 [2024-12-10 14:30:54.678045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:54.058 [2024-12-10 14:30:54.678052] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:54.058 [2024-12-10 14:30:54.690310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:54.058 [2024-12-10 14:30:54.690704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.058 [2024-12-10 14:30:54.690721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:54.058 [2024-12-10 14:30:54.690729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:54.058 [2024-12-10 14:30:54.690890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.058 [2024-12-10 14:30:54.691051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:54.058 [2024-12-10 14:30:54.691060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:54.058 [2024-12-10 14:30:54.691067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:54.058 [2024-12-10 14:30:54.691073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:54.058 [2024-12-10 14:30:54.703144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:54.058 [2024-12-10 14:30:54.703576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.058 [2024-12-10 14:30:54.703622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:54.058 [2024-12-10 14:30:54.703646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:54.058 [2024-12-10 14:30:54.704244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.058 [2024-12-10 14:30:54.704743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:54.058 [2024-12-10 14:30:54.704753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:54.058 [2024-12-10 14:30:54.704760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:54.058 [2024-12-10 14:30:54.704766] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:54.058 [2024-12-10 14:30:54.715988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:54.058 [2024-12-10 14:30:54.716336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.058 [2024-12-10 14:30:54.716353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:54.058 [2024-12-10 14:30:54.716360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:54.058 [2024-12-10 14:30:54.716519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.058 [2024-12-10 14:30:54.716681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:54.059 [2024-12-10 14:30:54.716690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:54.059 [2024-12-10 14:30:54.716697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:54.059 [2024-12-10 14:30:54.716707] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:54.059 [2024-12-10 14:30:54.728780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:54.059 [2024-12-10 14:30:54.729206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.059 [2024-12-10 14:30:54.729266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:54.059 [2024-12-10 14:30:54.729291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:54.059 [2024-12-10 14:30:54.729709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.059 [2024-12-10 14:30:54.729871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:54.059 [2024-12-10 14:30:54.729881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:54.059 [2024-12-10 14:30:54.729887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:54.059 [2024-12-10 14:30:54.729893] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:54.059 [2024-12-10 14:30:54.741658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:54.059 [2024-12-10 14:30:54.741997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.059 [2024-12-10 14:30:54.742013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:54.059 [2024-12-10 14:30:54.742021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:54.059 [2024-12-10 14:30:54.742181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.059 [2024-12-10 14:30:54.742369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:54.059 [2024-12-10 14:30:54.742379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:54.059 [2024-12-10 14:30:54.742386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:54.059 [2024-12-10 14:30:54.742393] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:54.059 [2024-12-10 14:30:54.754489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:54.059 [2024-12-10 14:30:54.754900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.059 [2024-12-10 14:30:54.754917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:54.059 [2024-12-10 14:30:54.754925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:54.059 [2024-12-10 14:30:54.755084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.059 [2024-12-10 14:30:54.755249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:54.059 [2024-12-10 14:30:54.755259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:54.059 [2024-12-10 14:30:54.755265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:54.059 [2024-12-10 14:30:54.755271] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:54.059 [2024-12-10 14:30:54.767306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:54.059 [2024-12-10 14:30:54.767720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.059 [2024-12-10 14:30:54.767741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:54.059 [2024-12-10 14:30:54.767748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:54.059 [2024-12-10 14:30:54.767908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.059 [2024-12-10 14:30:54.768068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:54.059 [2024-12-10 14:30:54.768078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:54.059 [2024-12-10 14:30:54.768084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:54.059 [2024-12-10 14:30:54.768090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:54.059 [2024-12-10 14:30:54.780153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:54.059 [2024-12-10 14:30:54.780510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.059 [2024-12-10 14:30:54.780527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:54.059 [2024-12-10 14:30:54.780534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:54.059 [2024-12-10 14:30:54.780694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.059 [2024-12-10 14:30:54.780855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:54.059 [2024-12-10 14:30:54.780865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:54.059 [2024-12-10 14:30:54.780871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:54.059 [2024-12-10 14:30:54.780877] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:54.319 [2024-12-10 14:30:54.793210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:54.319 [2024-12-10 14:30:54.793620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.319 [2024-12-10 14:30:54.793663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:54.319 [2024-12-10 14:30:54.793686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:54.319 [2024-12-10 14:30:54.794100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.319 [2024-12-10 14:30:54.794292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:54.319 [2024-12-10 14:30:54.794302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:54.319 [2024-12-10 14:30:54.794309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:54.319 [2024-12-10 14:30:54.794316] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:54.319 [2024-12-10 14:30:54.806283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:54.319 [2024-12-10 14:30:54.806645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.319 [2024-12-10 14:30:54.806662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:54.319 [2024-12-10 14:30:54.806670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:54.319 [2024-12-10 14:30:54.806848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.319 [2024-12-10 14:30:54.807024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:54.319 [2024-12-10 14:30:54.807033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:54.319 [2024-12-10 14:30:54.807041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:54.319 [2024-12-10 14:30:54.807048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:54.319 [2024-12-10 14:30:54.819389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:54.320 [2024-12-10 14:30:54.819759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.320 [2024-12-10 14:30:54.819776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:54.320 [2024-12-10 14:30:54.819783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:54.320 [2024-12-10 14:30:54.819957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:54.320 [2024-12-10 14:30:54.820132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:54.320 [2024-12-10 14:30:54.820141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:54.320 [2024-12-10 14:30:54.820148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:54.320 [2024-12-10 14:30:54.820155] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:54.320 [2024-12-10 14:30:54.832401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.320 [2024-12-10 14:30:54.832836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.320 [2024-12-10 14:30:54.832853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.320 [2024-12-10 14:30:54.832861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.320 [2024-12-10 14:30:54.833035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.320 [2024-12-10 14:30:54.833212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.320 [2024-12-10 14:30:54.833229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.320 [2024-12-10 14:30:54.833236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.320 [2024-12-10 14:30:54.833243] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.320 [2024-12-10 14:30:54.845335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.320 [2024-12-10 14:30:54.845758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.320 [2024-12-10 14:30:54.845776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.320 [2024-12-10 14:30:54.845784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.320 [2024-12-10 14:30:54.845953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.320 [2024-12-10 14:30:54.846123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.320 [2024-12-10 14:30:54.846136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.320 [2024-12-10 14:30:54.846143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.320 [2024-12-10 14:30:54.846149] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.320 [2024-12-10 14:30:54.858161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.320 [2024-12-10 14:30:54.858488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.320 [2024-12-10 14:30:54.858508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.320 [2024-12-10 14:30:54.858516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.320 [2024-12-10 14:30:54.858676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.320 [2024-12-10 14:30:54.858837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.320 [2024-12-10 14:30:54.858846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.320 [2024-12-10 14:30:54.858853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.320 [2024-12-10 14:30:54.858859] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.320 [2024-12-10 14:30:54.871005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.320 [2024-12-10 14:30:54.871360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.320 [2024-12-10 14:30:54.871406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.320 [2024-12-10 14:30:54.871430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.320 [2024-12-10 14:30:54.871933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.320 [2024-12-10 14:30:54.872095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.320 [2024-12-10 14:30:54.872105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.320 [2024-12-10 14:30:54.872111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.320 [2024-12-10 14:30:54.872118] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.320 [2024-12-10 14:30:54.883805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.320 [2024-12-10 14:30:54.884240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.320 [2024-12-10 14:30:54.884286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.320 [2024-12-10 14:30:54.884310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.320 [2024-12-10 14:30:54.884847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.320 [2024-12-10 14:30:54.885009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.320 [2024-12-10 14:30:54.885019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.320 [2024-12-10 14:30:54.885025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.320 [2024-12-10 14:30:54.885034] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.320 [2024-12-10 14:30:54.896641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.320 [2024-12-10 14:30:54.896970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.320 [2024-12-10 14:30:54.896986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.320 [2024-12-10 14:30:54.896993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.320 [2024-12-10 14:30:54.897153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.320 [2024-12-10 14:30:54.897338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.320 [2024-12-10 14:30:54.897347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.320 [2024-12-10 14:30:54.897355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.320 [2024-12-10 14:30:54.897361] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.320 [2024-12-10 14:30:54.909390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.320 [2024-12-10 14:30:54.909743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.320 [2024-12-10 14:30:54.909786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.320 [2024-12-10 14:30:54.909809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.320 [2024-12-10 14:30:54.910246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.320 [2024-12-10 14:30:54.910439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.320 [2024-12-10 14:30:54.910449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.320 [2024-12-10 14:30:54.910455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.320 [2024-12-10 14:30:54.910461] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.320 [2024-12-10 14:30:54.922135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.320 [2024-12-10 14:30:54.922531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.320 [2024-12-10 14:30:54.922548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.320 [2024-12-10 14:30:54.922556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.320 [2024-12-10 14:30:54.922716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.320 [2024-12-10 14:30:54.922876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.320 [2024-12-10 14:30:54.922886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.320 [2024-12-10 14:30:54.922892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.320 [2024-12-10 14:30:54.922898] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.320 [2024-12-10 14:30:54.934957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.320 [2024-12-10 14:30:54.935375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.320 [2024-12-10 14:30:54.935398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.320 [2024-12-10 14:30:54.935406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.320 [2024-12-10 14:30:54.935566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.320 [2024-12-10 14:30:54.935727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.320 [2024-12-10 14:30:54.935737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.320 [2024-12-10 14:30:54.935743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.320 [2024-12-10 14:30:54.935749] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.321 [2024-12-10 14:30:54.947813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.321 [2024-12-10 14:30:54.948226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.321 [2024-12-10 14:30:54.948244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.321 [2024-12-10 14:30:54.948251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.321 [2024-12-10 14:30:54.948411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.321 [2024-12-10 14:30:54.948571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.321 [2024-12-10 14:30:54.948581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.321 [2024-12-10 14:30:54.948587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.321 [2024-12-10 14:30:54.948593] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.321 [2024-12-10 14:30:54.960621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.321 [2024-12-10 14:30:54.960938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.321 [2024-12-10 14:30:54.960954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.321 [2024-12-10 14:30:54.960962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.321 [2024-12-10 14:30:54.961122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.321 [2024-12-10 14:30:54.961306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.321 [2024-12-10 14:30:54.961316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.321 [2024-12-10 14:30:54.961323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.321 [2024-12-10 14:30:54.961329] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.321 [2024-12-10 14:30:54.973457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.321 [2024-12-10 14:30:54.973807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.321 [2024-12-10 14:30:54.973850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.321 [2024-12-10 14:30:54.973874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.321 [2024-12-10 14:30:54.974314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.321 [2024-12-10 14:30:54.974486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.321 [2024-12-10 14:30:54.974495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.321 [2024-12-10 14:30:54.974502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.321 [2024-12-10 14:30:54.974508] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.321 [2024-12-10 14:30:54.986295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.321 [2024-12-10 14:30:54.986711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.321 [2024-12-10 14:30:54.986728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.321 [2024-12-10 14:30:54.986735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.321 [2024-12-10 14:30:54.986896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.321 [2024-12-10 14:30:54.987056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.321 [2024-12-10 14:30:54.987065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.321 [2024-12-10 14:30:54.987071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.321 [2024-12-10 14:30:54.987078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.321 [2024-12-10 14:30:54.999140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.321 [2024-12-10 14:30:54.999436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.321 [2024-12-10 14:30:54.999454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.321 [2024-12-10 14:30:54.999461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.321 [2024-12-10 14:30:54.999634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.321 [2024-12-10 14:30:54.999795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.321 [2024-12-10 14:30:54.999804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.321 [2024-12-10 14:30:54.999811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.321 [2024-12-10 14:30:54.999816] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.321 [2024-12-10 14:30:55.011956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.321 [2024-12-10 14:30:55.012365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.321 [2024-12-10 14:30:55.012382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.321 [2024-12-10 14:30:55.012389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.321 [2024-12-10 14:30:55.012550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.321 [2024-12-10 14:30:55.012710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.321 [2024-12-10 14:30:55.012722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.321 [2024-12-10 14:30:55.012728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.321 [2024-12-10 14:30:55.012734] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.321 [2024-12-10 14:30:55.024805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.321 [2024-12-10 14:30:55.025202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.321 [2024-12-10 14:30:55.025223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.321 [2024-12-10 14:30:55.025230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.321 [2024-12-10 14:30:55.025390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.321 [2024-12-10 14:30:55.025552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.321 [2024-12-10 14:30:55.025561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.321 [2024-12-10 14:30:55.025568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.321 [2024-12-10 14:30:55.025573] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.321 [2024-12-10 14:30:55.037582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.321 [2024-12-10 14:30:55.037999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.321 [2024-12-10 14:30:55.038016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.321 [2024-12-10 14:30:55.038023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.321 [2024-12-10 14:30:55.038183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.321 [2024-12-10 14:30:55.038371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.321 [2024-12-10 14:30:55.038381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.321 [2024-12-10 14:30:55.038387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.321 [2024-12-10 14:30:55.038394] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.321 [2024-12-10 14:30:55.050376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.321 [2024-12-10 14:30:55.050716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.321 [2024-12-10 14:30:55.050733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.321 [2024-12-10 14:30:55.050741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.321 [2024-12-10 14:30:55.050911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.321 [2024-12-10 14:30:55.051080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.321 [2024-12-10 14:30:55.051090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.321 [2024-12-10 14:30:55.051097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.321 [2024-12-10 14:30:55.051108] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.580 [2024-12-10 14:30:55.063414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.580 [2024-12-10 14:30:55.063790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.580 [2024-12-10 14:30:55.063808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.580 [2024-12-10 14:30:55.063817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.580 [2024-12-10 14:30:55.063992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.580 [2024-12-10 14:30:55.064167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.580 [2024-12-10 14:30:55.064177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.580 [2024-12-10 14:30:55.064184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.580 [2024-12-10 14:30:55.064191] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.580 9478.00 IOPS, 37.02 MiB/s [2024-12-10T13:30:55.320Z] [2024-12-10 14:30:55.076442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.580 [2024-12-10 14:30:55.076864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.580 [2024-12-10 14:30:55.076905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.580 [2024-12-10 14:30:55.076931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.580 [2024-12-10 14:30:55.077500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.580 [2024-12-10 14:30:55.077672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.580 [2024-12-10 14:30:55.077682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.580 [2024-12-10 14:30:55.077690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.580 [2024-12-10 14:30:55.077698] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.580 [2024-12-10 14:30:55.089347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.580 [2024-12-10 14:30:55.089789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.580 [2024-12-10 14:30:55.089806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.580 [2024-12-10 14:30:55.089814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.580 [2024-12-10 14:30:55.089974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.580 [2024-12-10 14:30:55.090134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.580 [2024-12-10 14:30:55.090144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.580 [2024-12-10 14:30:55.090150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.580 [2024-12-10 14:30:55.090156] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.580 [2024-12-10 14:30:55.102230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.580 [2024-12-10 14:30:55.102588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.580 [2024-12-10 14:30:55.102605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.580 [2024-12-10 14:30:55.102613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.580 [2024-12-10 14:30:55.102781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.580 [2024-12-10 14:30:55.102951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.580 [2024-12-10 14:30:55.102961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.580 [2024-12-10 14:30:55.102968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.580 [2024-12-10 14:30:55.102974] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.580 [2024-12-10 14:30:55.115063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.580 [2024-12-10 14:30:55.115485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.580 [2024-12-10 14:30:55.115502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.580 [2024-12-10 14:30:55.115510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.580 [2024-12-10 14:30:55.115671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.580 [2024-12-10 14:30:55.115832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.580 [2024-12-10 14:30:55.115841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.580 [2024-12-10 14:30:55.115848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.580 [2024-12-10 14:30:55.115854] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.580 [2024-12-10 14:30:55.127855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.580 [2024-12-10 14:30:55.128403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.580 [2024-12-10 14:30:55.128422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.580 [2024-12-10 14:30:55.128430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.580 [2024-12-10 14:30:55.128592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.580 [2024-12-10 14:30:55.128753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.580 [2024-12-10 14:30:55.128762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.580 [2024-12-10 14:30:55.128768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.580 [2024-12-10 14:30:55.128775] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.580 [2024-12-10 14:30:55.140698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.580 [2024-12-10 14:30:55.141109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.580 [2024-12-10 14:30:55.141150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.580 [2024-12-10 14:30:55.141176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.580 [2024-12-10 14:30:55.141747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.580 [2024-12-10 14:30:55.141918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.580 [2024-12-10 14:30:55.141928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.580 [2024-12-10 14:30:55.141934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.580 [2024-12-10 14:30:55.141941] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.580 [2024-12-10 14:30:55.153546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.580 [2024-12-10 14:30:55.153937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.580 [2024-12-10 14:30:55.153954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.580 [2024-12-10 14:30:55.153961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.580 [2024-12-10 14:30:55.154121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.580 [2024-12-10 14:30:55.154303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.580 [2024-12-10 14:30:55.154314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.580 [2024-12-10 14:30:55.154320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.580 [2024-12-10 14:30:55.154327] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.580 [2024-12-10 14:30:55.166308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.580 [2024-12-10 14:30:55.166723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.580 [2024-12-10 14:30:55.166739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.580 [2024-12-10 14:30:55.166747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.580 [2024-12-10 14:30:55.166907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.580 [2024-12-10 14:30:55.167067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.580 [2024-12-10 14:30:55.167076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.580 [2024-12-10 14:30:55.167083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.580 [2024-12-10 14:30:55.167090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.580 [2024-12-10 14:30:55.179141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.580 [2024-12-10 14:30:55.179542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.580 [2024-12-10 14:30:55.179587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.580 [2024-12-10 14:30:55.179613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.580 [2024-12-10 14:30:55.180031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.580 [2024-12-10 14:30:55.180193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.580 [2024-12-10 14:30:55.180206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.580 [2024-12-10 14:30:55.180212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.580 [2024-12-10 14:30:55.180224] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.580 [2024-12-10 14:30:55.191922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.580 [2024-12-10 14:30:55.192344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.580 [2024-12-10 14:30:55.192403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.580 [2024-12-10 14:30:55.192427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.580 [2024-12-10 14:30:55.192901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.580 [2024-12-10 14:30:55.193072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.580 [2024-12-10 14:30:55.193082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.580 [2024-12-10 14:30:55.193088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.580 [2024-12-10 14:30:55.193094] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.580 [2024-12-10 14:30:55.204668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.580 [2024-12-10 14:30:55.205059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.580 [2024-12-10 14:30:55.205076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.580 [2024-12-10 14:30:55.205084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.580 [2024-12-10 14:30:55.205266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.580 [2024-12-10 14:30:55.205437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.580 [2024-12-10 14:30:55.205447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.580 [2024-12-10 14:30:55.205453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.580 [2024-12-10 14:30:55.205459] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.580 [2024-12-10 14:30:55.217553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.580 [2024-12-10 14:30:55.217965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.580 [2024-12-10 14:30:55.217982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.580 [2024-12-10 14:30:55.217990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.580 [2024-12-10 14:30:55.218150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.580 [2024-12-10 14:30:55.218335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.580 [2024-12-10 14:30:55.218345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.580 [2024-12-10 14:30:55.218352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.580 [2024-12-10 14:30:55.218362] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.580 [2024-12-10 14:30:55.230343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.580 [2024-12-10 14:30:55.230756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.580 [2024-12-10 14:30:55.230773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.580 [2024-12-10 14:30:55.230781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.580 [2024-12-10 14:30:55.230941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.580 [2024-12-10 14:30:55.231102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.580 [2024-12-10 14:30:55.231111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.580 [2024-12-10 14:30:55.231117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.580 [2024-12-10 14:30:55.231124] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.581 [2024-12-10 14:30:55.243095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.581 [2024-12-10 14:30:55.243518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.581 [2024-12-10 14:30:55.243562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.581 [2024-12-10 14:30:55.243586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.581 [2024-12-10 14:30:55.243998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.581 [2024-12-10 14:30:55.244160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.581 [2024-12-10 14:30:55.244169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.581 [2024-12-10 14:30:55.244175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.581 [2024-12-10 14:30:55.244181] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.581 [2024-12-10 14:30:55.255897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.581 [2024-12-10 14:30:55.256295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.581 [2024-12-10 14:30:55.256313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.581 [2024-12-10 14:30:55.256321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.581 [2024-12-10 14:30:55.256495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.581 [2024-12-10 14:30:55.256656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.581 [2024-12-10 14:30:55.256665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.581 [2024-12-10 14:30:55.256671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.581 [2024-12-10 14:30:55.256678] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.581 [2024-12-10 14:30:55.268774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.581 [2024-12-10 14:30:55.269104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.581 [2024-12-10 14:30:55.269122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.581 [2024-12-10 14:30:55.269130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.581 [2024-12-10 14:30:55.269304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.581 [2024-12-10 14:30:55.269485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.581 [2024-12-10 14:30:55.269494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.581 [2024-12-10 14:30:55.269500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.581 [2024-12-10 14:30:55.269506] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.581 [2024-12-10 14:30:55.281682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.581 [2024-12-10 14:30:55.282054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.581 [2024-12-10 14:30:55.282072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.581 [2024-12-10 14:30:55.282080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.581 [2024-12-10 14:30:55.282244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.581 [2024-12-10 14:30:55.282405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.581 [2024-12-10 14:30:55.282416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.581 [2024-12-10 14:30:55.282422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.581 [2024-12-10 14:30:55.282429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.581 [2024-12-10 14:30:55.294539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.581 [2024-12-10 14:30:55.294878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.581 [2024-12-10 14:30:55.294895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.581 [2024-12-10 14:30:55.294902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.581 [2024-12-10 14:30:55.295063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.581 [2024-12-10 14:30:55.295227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.581 [2024-12-10 14:30:55.295238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.581 [2024-12-10 14:30:55.295245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.581 [2024-12-10 14:30:55.295251] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.581 [2024-12-10 14:30:55.307655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.581 [2024-12-10 14:30:55.308078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.581 [2024-12-10 14:30:55.308095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.581 [2024-12-10 14:30:55.308104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.581 [2024-12-10 14:30:55.308286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.581 [2024-12-10 14:30:55.308461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.581 [2024-12-10 14:30:55.308471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.581 [2024-12-10 14:30:55.308478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.581 [2024-12-10 14:30:55.308484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.840 [2024-12-10 14:30:55.320645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.840 [2024-12-10 14:30:55.321028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.840 [2024-12-10 14:30:55.321045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.840 [2024-12-10 14:30:55.321053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.840 [2024-12-10 14:30:55.321231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.840 [2024-12-10 14:30:55.321406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.840 [2024-12-10 14:30:55.321415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.840 [2024-12-10 14:30:55.321422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.840 [2024-12-10 14:30:55.321429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.840 [2024-12-10 14:30:55.333673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.840 [2024-12-10 14:30:55.334030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.840 [2024-12-10 14:30:55.334048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.840 [2024-12-10 14:30:55.334055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.840 [2024-12-10 14:30:55.334230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.840 [2024-12-10 14:30:55.334401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.840 [2024-12-10 14:30:55.334411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.840 [2024-12-10 14:30:55.334417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.840 [2024-12-10 14:30:55.334423] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.840 [2024-12-10 14:30:55.346558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.840 [2024-12-10 14:30:55.346853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.840 [2024-12-10 14:30:55.346870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.840 [2024-12-10 14:30:55.346878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.840 [2024-12-10 14:30:55.347046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.840 [2024-12-10 14:30:55.347223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.840 [2024-12-10 14:30:55.347237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.840 [2024-12-10 14:30:55.347244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.840 [2024-12-10 14:30:55.347251] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.840 [2024-12-10 14:30:55.359502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.840 [2024-12-10 14:30:55.359780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.840 [2024-12-10 14:30:55.359796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.840 [2024-12-10 14:30:55.359803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.840 [2024-12-10 14:30:55.359962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.840 [2024-12-10 14:30:55.360123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.841 [2024-12-10 14:30:55.360132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.841 [2024-12-10 14:30:55.360138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.841 [2024-12-10 14:30:55.360145] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.841 [2024-12-10 14:30:55.372465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.841 [2024-12-10 14:30:55.372732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.841 [2024-12-10 14:30:55.372748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.841 [2024-12-10 14:30:55.372756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.841 [2024-12-10 14:30:55.372916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.841 [2024-12-10 14:30:55.373077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.841 [2024-12-10 14:30:55.373086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.841 [2024-12-10 14:30:55.373093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.841 [2024-12-10 14:30:55.373100] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.841 [2024-12-10 14:30:55.385367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.841 [2024-12-10 14:30:55.385639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.841 [2024-12-10 14:30:55.385656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.841 [2024-12-10 14:30:55.385663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.841 [2024-12-10 14:30:55.385823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.841 [2024-12-10 14:30:55.385983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.841 [2024-12-10 14:30:55.385993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.841 [2024-12-10 14:30:55.385999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.841 [2024-12-10 14:30:55.386008] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.841 [2024-12-10 14:30:55.398177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.841 [2024-12-10 14:30:55.398509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.841 [2024-12-10 14:30:55.398527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.841 [2024-12-10 14:30:55.398534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.841 [2024-12-10 14:30:55.398695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.841 [2024-12-10 14:30:55.398873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.841 [2024-12-10 14:30:55.398882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.841 [2024-12-10 14:30:55.398889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.841 [2024-12-10 14:30:55.398895] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.841 [2024-12-10 14:30:55.410978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.841 [2024-12-10 14:30:55.411257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.841 [2024-12-10 14:30:55.411275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.841 [2024-12-10 14:30:55.411283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.841 [2024-12-10 14:30:55.411453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.841 [2024-12-10 14:30:55.411623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.841 [2024-12-10 14:30:55.411633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.841 [2024-12-10 14:30:55.411640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.841 [2024-12-10 14:30:55.411646] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.841 [2024-12-10 14:30:55.423831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.841 [2024-12-10 14:30:55.424716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.841 [2024-12-10 14:30:55.424739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.841 [2024-12-10 14:30:55.424749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.841 [2024-12-10 14:30:55.424926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.841 [2024-12-10 14:30:55.425100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.841 [2024-12-10 14:30:55.425109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.841 [2024-12-10 14:30:55.425116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.841 [2024-12-10 14:30:55.425123] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.841 [2024-12-10 14:30:55.436679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.841 [2024-12-10 14:30:55.437016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.841 [2024-12-10 14:30:55.437034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.841 [2024-12-10 14:30:55.437042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.841 [2024-12-10 14:30:55.437212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.841 [2024-12-10 14:30:55.437389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.841 [2024-12-10 14:30:55.437399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.841 [2024-12-10 14:30:55.437406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.841 [2024-12-10 14:30:55.437413] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.841 [2024-12-10 14:30:55.449588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.841 [2024-12-10 14:30:55.449904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.841 [2024-12-10 14:30:55.449921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.841 [2024-12-10 14:30:55.449928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.841 [2024-12-10 14:30:55.450088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.841 [2024-12-10 14:30:55.450253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.841 [2024-12-10 14:30:55.450263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.841 [2024-12-10 14:30:55.450270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.841 [2024-12-10 14:30:55.450277] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.841 [2024-12-10 14:30:55.462436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.841 [2024-12-10 14:30:55.462777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.841 [2024-12-10 14:30:55.462794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.841 [2024-12-10 14:30:55.462802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.841 [2024-12-10 14:30:55.462962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.841 [2024-12-10 14:30:55.463123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.841 [2024-12-10 14:30:55.463132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.841 [2024-12-10 14:30:55.463139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.841 [2024-12-10 14:30:55.463145] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.841 [2024-12-10 14:30:55.475310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.841 [2024-12-10 14:30:55.475594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.841 [2024-12-10 14:30:55.475611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.841 [2024-12-10 14:30:55.475619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.841 [2024-12-10 14:30:55.475783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.841 [2024-12-10 14:30:55.475944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.841 [2024-12-10 14:30:55.475954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.841 [2024-12-10 14:30:55.475960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.841 [2024-12-10 14:30:55.475966] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.841 [2024-12-10 14:30:55.488222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.841 [2024-12-10 14:30:55.488583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.842 [2024-12-10 14:30:55.488628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.842 [2024-12-10 14:30:55.488652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.842 [2024-12-10 14:30:55.489250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.842 [2024-12-10 14:30:55.489649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.842 [2024-12-10 14:30:55.489658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.842 [2024-12-10 14:30:55.489665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.842 [2024-12-10 14:30:55.489671] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.842 [2024-12-10 14:30:55.501145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.842 [2024-12-10 14:30:55.501463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.842 [2024-12-10 14:30:55.501480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.842 [2024-12-10 14:30:55.501487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.842 [2024-12-10 14:30:55.501648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.842 [2024-12-10 14:30:55.501810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.842 [2024-12-10 14:30:55.501820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.842 [2024-12-10 14:30:55.501826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.842 [2024-12-10 14:30:55.501832] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.842 [2024-12-10 14:30:55.514108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.842 [2024-12-10 14:30:55.514375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.842 [2024-12-10 14:30:55.514393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.842 [2024-12-10 14:30:55.514401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.842 [2024-12-10 14:30:55.514561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.842 [2024-12-10 14:30:55.514722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.842 [2024-12-10 14:30:55.514735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.842 [2024-12-10 14:30:55.514741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.842 [2024-12-10 14:30:55.514747] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.842 [2024-12-10 14:30:55.526959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.842 [2024-12-10 14:30:55.527257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.842 [2024-12-10 14:30:55.527303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.842 [2024-12-10 14:30:55.527328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.842 [2024-12-10 14:30:55.527857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.842 [2024-12-10 14:30:55.528235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.842 [2024-12-10 14:30:55.528253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.842 [2024-12-10 14:30:55.528267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.842 [2024-12-10 14:30:55.528281] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.842 [2024-12-10 14:30:55.541674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.842 [2024-12-10 14:30:55.542144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.842 [2024-12-10 14:30:55.542166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.842 [2024-12-10 14:30:55.542176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.842 [2024-12-10 14:30:55.542417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.842 [2024-12-10 14:30:55.542657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.842 [2024-12-10 14:30:55.542669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.842 [2024-12-10 14:30:55.542678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.842 [2024-12-10 14:30:55.542687] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.842 [2024-12-10 14:30:55.554572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.842 [2024-12-10 14:30:55.554888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.842 [2024-12-10 14:30:55.554904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.842 [2024-12-10 14:30:55.554911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.842 [2024-12-10 14:30:55.555072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.842 [2024-12-10 14:30:55.555239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.842 [2024-12-10 14:30:55.555247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.842 [2024-12-10 14:30:55.555254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.842 [2024-12-10 14:30:55.555263] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.842 [2024-12-10 14:30:55.567465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.842 [2024-12-10 14:30:55.567802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.842 [2024-12-10 14:30:55.567820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:54.842 [2024-12-10 14:30:55.567828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:54.842 [2024-12-10 14:30:55.568002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:54.842 [2024-12-10 14:30:55.568177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.842 [2024-12-10 14:30:55.568187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.842 [2024-12-10 14:30:55.568194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.842 [2024-12-10 14:30:55.568201] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.103 [2024-12-10 14:30:55.580589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.103 [2024-12-10 14:30:55.580878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.103 [2024-12-10 14:30:55.580896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.103 [2024-12-10 14:30:55.580904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.103 [2024-12-10 14:30:55.581078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.103 [2024-12-10 14:30:55.581257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.103 [2024-12-10 14:30:55.581267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.103 [2024-12-10 14:30:55.581275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.103 [2024-12-10 14:30:55.581281] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.103 [2024-12-10 14:30:55.593761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.103 [2024-12-10 14:30:55.594048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.103 [2024-12-10 14:30:55.594066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.103 [2024-12-10 14:30:55.594074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.103 [2024-12-10 14:30:55.594254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.103 [2024-12-10 14:30:55.594442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.103 [2024-12-10 14:30:55.594452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.103 [2024-12-10 14:30:55.594458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.103 [2024-12-10 14:30:55.594465] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.103 [2024-12-10 14:30:55.606831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.103 [2024-12-10 14:30:55.607182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.103 [2024-12-10 14:30:55.607200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.103 [2024-12-10 14:30:55.607207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.103 [2024-12-10 14:30:55.607387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.103 [2024-12-10 14:30:55.607563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.103 [2024-12-10 14:30:55.607573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.103 [2024-12-10 14:30:55.607579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.103 [2024-12-10 14:30:55.607586] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.103 [2024-12-10 14:30:55.620105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.103 [2024-12-10 14:30:55.620489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.103 [2024-12-10 14:30:55.620508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.103 [2024-12-10 14:30:55.620516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.103 [2024-12-10 14:30:55.620702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.103 [2024-12-10 14:30:55.620889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.103 [2024-12-10 14:30:55.620899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.103 [2024-12-10 14:30:55.620907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.103 [2024-12-10 14:30:55.620914] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.103 [2024-12-10 14:30:55.633387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.103 [2024-12-10 14:30:55.633780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.103 [2024-12-10 14:30:55.633797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.103 [2024-12-10 14:30:55.633805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.103 [2024-12-10 14:30:55.633990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.103 [2024-12-10 14:30:55.634177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.103 [2024-12-10 14:30:55.634187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.103 [2024-12-10 14:30:55.634195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.103 [2024-12-10 14:30:55.634203] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.103 [2024-12-10 14:30:55.646475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.103 [2024-12-10 14:30:55.646807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.103 [2024-12-10 14:30:55.646824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.103 [2024-12-10 14:30:55.646832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.103 [2024-12-10 14:30:55.647008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.103 [2024-12-10 14:30:55.647183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.103 [2024-12-10 14:30:55.647193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.103 [2024-12-10 14:30:55.647200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.103 [2024-12-10 14:30:55.647206] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.103 [2024-12-10 14:30:55.659731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.103 [2024-12-10 14:30:55.660103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.103 [2024-12-10 14:30:55.660121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.103 [2024-12-10 14:30:55.660130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.103 [2024-12-10 14:30:55.660322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.103 [2024-12-10 14:30:55.660509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.103 [2024-12-10 14:30:55.660519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.103 [2024-12-10 14:30:55.660526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.104 [2024-12-10 14:30:55.660532] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.104 [2024-12-10 14:30:55.672752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.104 [2024-12-10 14:30:55.673113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.104 [2024-12-10 14:30:55.673157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.104 [2024-12-10 14:30:55.673181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.104 [2024-12-10 14:30:55.673738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.104 [2024-12-10 14:30:55.673916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.104 [2024-12-10 14:30:55.673926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.104 [2024-12-10 14:30:55.673933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.104 [2024-12-10 14:30:55.673939] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.104 [2024-12-10 14:30:55.685736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.104 [2024-12-10 14:30:55.686074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.104 [2024-12-10 14:30:55.686092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.104 [2024-12-10 14:30:55.686100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.104 [2024-12-10 14:30:55.686283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.104 [2024-12-10 14:30:55.686460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.104 [2024-12-10 14:30:55.686473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.104 [2024-12-10 14:30:55.686480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.104 [2024-12-10 14:30:55.686488] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.104 [2024-12-10 14:30:55.698829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.104 [2024-12-10 14:30:55.699233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.104 [2024-12-10 14:30:55.699251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.104 [2024-12-10 14:30:55.699259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.104 [2024-12-10 14:30:55.699429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.104 [2024-12-10 14:30:55.699600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.104 [2024-12-10 14:30:55.699610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.104 [2024-12-10 14:30:55.699617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.104 [2024-12-10 14:30:55.699624] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.104 [2024-12-10 14:30:55.711720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.104 [2024-12-10 14:30:55.712114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.104 [2024-12-10 14:30:55.712158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.104 [2024-12-10 14:30:55.712182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.104 [2024-12-10 14:30:55.712787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.104 [2024-12-10 14:30:55.713004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.104 [2024-12-10 14:30:55.713014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.104 [2024-12-10 14:30:55.713021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.104 [2024-12-10 14:30:55.713028] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.104 [2024-12-10 14:30:55.724526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.104 [2024-12-10 14:30:55.724943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.104 [2024-12-10 14:30:55.724960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.104 [2024-12-10 14:30:55.724967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.104 [2024-12-10 14:30:55.725127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.104 [2024-12-10 14:30:55.725296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.104 [2024-12-10 14:30:55.725307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.104 [2024-12-10 14:30:55.725313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.104 [2024-12-10 14:30:55.725322] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.104 [2024-12-10 14:30:55.737406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.104 [2024-12-10 14:30:55.737813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.104 [2024-12-10 14:30:55.737858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.104 [2024-12-10 14:30:55.737881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.104 [2024-12-10 14:30:55.738487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.104 [2024-12-10 14:30:55.739080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.104 [2024-12-10 14:30:55.739108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.104 [2024-12-10 14:30:55.739114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.104 [2024-12-10 14:30:55.739121] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.104 [2024-12-10 14:30:55.750273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.104 [2024-12-10 14:30:55.750616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.104 [2024-12-10 14:30:55.750632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.104 [2024-12-10 14:30:55.750639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.104 [2024-12-10 14:30:55.750799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.104 [2024-12-10 14:30:55.750960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.104 [2024-12-10 14:30:55.750969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.104 [2024-12-10 14:30:55.750975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.104 [2024-12-10 14:30:55.750981] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.104 [2024-12-10 14:30:55.763033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.104 [2024-12-10 14:30:55.763426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.104 [2024-12-10 14:30:55.763443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.104 [2024-12-10 14:30:55.763451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.104 [2024-12-10 14:30:55.763612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.104 [2024-12-10 14:30:55.763773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.104 [2024-12-10 14:30:55.763782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.104 [2024-12-10 14:30:55.763788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.104 [2024-12-10 14:30:55.763794] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.104 [2024-12-10 14:30:55.775844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.104 [2024-12-10 14:30:55.776276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.104 [2024-12-10 14:30:55.776329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.104 [2024-12-10 14:30:55.776353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.104 [2024-12-10 14:30:55.776946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.104 [2024-12-10 14:30:55.777171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.104 [2024-12-10 14:30:55.777181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.104 [2024-12-10 14:30:55.777187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.104 [2024-12-10 14:30:55.777193] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.104 [2024-12-10 14:30:55.788723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.104 [2024-12-10 14:30:55.789118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.104 [2024-12-10 14:30:55.789136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.104 [2024-12-10 14:30:55.789143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.104 [2024-12-10 14:30:55.789311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.104 [2024-12-10 14:30:55.789473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.104 [2024-12-10 14:30:55.789482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.104 [2024-12-10 14:30:55.789488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.104 [2024-12-10 14:30:55.789494] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.104 [2024-12-10 14:30:55.801529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.105 [2024-12-10 14:30:55.801927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.105 [2024-12-10 14:30:55.801944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.105 [2024-12-10 14:30:55.801951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.105 [2024-12-10 14:30:55.802112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.105 [2024-12-10 14:30:55.802279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.105 [2024-12-10 14:30:55.802290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.105 [2024-12-10 14:30:55.802297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.105 [2024-12-10 14:30:55.802304] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.105 [2024-12-10 14:30:55.814343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.105 [2024-12-10 14:30:55.814755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.105 [2024-12-10 14:30:55.814772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.105 [2024-12-10 14:30:55.814779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.105 [2024-12-10 14:30:55.814942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.105 [2024-12-10 14:30:55.815103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.105 [2024-12-10 14:30:55.815112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.105 [2024-12-10 14:30:55.815118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.105 [2024-12-10 14:30:55.815124] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.105 [2024-12-10 14:30:55.827107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.105 [2024-12-10 14:30:55.827504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.105 [2024-12-10 14:30:55.827522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.105 [2024-12-10 14:30:55.827529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.105 [2024-12-10 14:30:55.827700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.105 [2024-12-10 14:30:55.827870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.105 [2024-12-10 14:30:55.827879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.105 [2024-12-10 14:30:55.827886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.105 [2024-12-10 14:30:55.827892] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.105 [2024-12-10 14:30:55.840228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.105 [2024-12-10 14:30:55.840580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.105 [2024-12-10 14:30:55.840597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.105 [2024-12-10 14:30:55.840605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.105 [2024-12-10 14:30:55.840779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.365 [2024-12-10 14:30:55.840955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.365 [2024-12-10 14:30:55.840965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.365 [2024-12-10 14:30:55.840972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.365 [2024-12-10 14:30:55.840979] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.365 [2024-12-10 14:30:55.853268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.365 [2024-12-10 14:30:55.853695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.365 [2024-12-10 14:30:55.853740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.365 [2024-12-10 14:30:55.853763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.365 [2024-12-10 14:30:55.854366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.365 [2024-12-10 14:30:55.854702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.365 [2024-12-10 14:30:55.854714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.365 [2024-12-10 14:30:55.854721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.365 [2024-12-10 14:30:55.854727] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.365 [2024-12-10 14:30:55.866150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.365 [2024-12-10 14:30:55.866491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.365 [2024-12-10 14:30:55.866507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.365 [2024-12-10 14:30:55.866515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.365 [2024-12-10 14:30:55.866675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.365 [2024-12-10 14:30:55.866836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.365 [2024-12-10 14:30:55.866845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.365 [2024-12-10 14:30:55.866852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.365 [2024-12-10 14:30:55.866858] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.365 [2024-12-10 14:30:55.878992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.365 [2024-12-10 14:30:55.879410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.365 [2024-12-10 14:30:55.879451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.365 [2024-12-10 14:30:55.879476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.365 [2024-12-10 14:30:55.880063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.365 [2024-12-10 14:30:55.880561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.365 [2024-12-10 14:30:55.880572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.365 [2024-12-10 14:30:55.880578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.365 [2024-12-10 14:30:55.880584] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.365 [2024-12-10 14:30:55.891806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.365 [2024-12-10 14:30:55.892205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.365 [2024-12-10 14:30:55.892260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.365 [2024-12-10 14:30:55.892284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.365 [2024-12-10 14:30:55.892745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.365 [2024-12-10 14:30:55.892907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.365 [2024-12-10 14:30:55.892914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.365 [2024-12-10 14:30:55.892921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.365 [2024-12-10 14:30:55.892930] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.365 [2024-12-10 14:30:55.904664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.365 [2024-12-10 14:30:55.905059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.365 [2024-12-10 14:30:55.905076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.365 [2024-12-10 14:30:55.905084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.365 [2024-12-10 14:30:55.905252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.365 [2024-12-10 14:30:55.905415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.365 [2024-12-10 14:30:55.905424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.365 [2024-12-10 14:30:55.905430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.365 [2024-12-10 14:30:55.905436] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.365 [2024-12-10 14:30:55.917481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.365 [2024-12-10 14:30:55.917894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.365 [2024-12-10 14:30:55.917911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.365 [2024-12-10 14:30:55.917918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.365 [2024-12-10 14:30:55.918078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.365 [2024-12-10 14:30:55.918246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.365 [2024-12-10 14:30:55.918257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.365 [2024-12-10 14:30:55.918264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.365 [2024-12-10 14:30:55.918270] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.365 [2024-12-10 14:30:55.930239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.365 [2024-12-10 14:30:55.930662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.365 [2024-12-10 14:30:55.930702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.365 [2024-12-10 14:30:55.930728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.366 [2024-12-10 14:30:55.931333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.366 [2024-12-10 14:30:55.931798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.366 [2024-12-10 14:30:55.931808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.366 [2024-12-10 14:30:55.931814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.366 [2024-12-10 14:30:55.931820] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.366 [2024-12-10 14:30:55.943095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.366 [2024-12-10 14:30:55.943512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.366 [2024-12-10 14:30:55.943533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.366 [2024-12-10 14:30:55.943541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.366 [2024-12-10 14:30:55.943702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.366 [2024-12-10 14:30:55.943862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.366 [2024-12-10 14:30:55.943871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.366 [2024-12-10 14:30:55.943878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.366 [2024-12-10 14:30:55.943884] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.366 [2024-12-10 14:30:55.955923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.366 [2024-12-10 14:30:55.956322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.366 [2024-12-10 14:30:55.956367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.366 [2024-12-10 14:30:55.956391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.366 [2024-12-10 14:30:55.956604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.366 [2024-12-10 14:30:55.956767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.366 [2024-12-10 14:30:55.956776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.366 [2024-12-10 14:30:55.956783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.366 [2024-12-10 14:30:55.956789] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.366 [2024-12-10 14:30:55.968671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.366 [2024-12-10 14:30:55.969105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.366 [2024-12-10 14:30:55.969149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.366 [2024-12-10 14:30:55.969172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.366 [2024-12-10 14:30:55.969727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.366 [2024-12-10 14:30:55.969890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.366 [2024-12-10 14:30:55.969899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.366 [2024-12-10 14:30:55.969905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.366 [2024-12-10 14:30:55.969911] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.366 [2024-12-10 14:30:55.981503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.366 [2024-12-10 14:30:55.981891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.366 [2024-12-10 14:30:55.981908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.366 [2024-12-10 14:30:55.981915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.366 [2024-12-10 14:30:55.982079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.366 [2024-12-10 14:30:55.982247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.366 [2024-12-10 14:30:55.982258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.366 [2024-12-10 14:30:55.982265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.366 [2024-12-10 14:30:55.982272] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.366 [2024-12-10 14:30:55.994391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.366 [2024-12-10 14:30:55.994813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.366 [2024-12-10 14:30:55.994856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.366 [2024-12-10 14:30:55.994880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.366 [2024-12-10 14:30:55.995492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.366 [2024-12-10 14:30:55.995875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.366 [2024-12-10 14:30:55.995884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.366 [2024-12-10 14:30:55.995891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.366 [2024-12-10 14:30:55.995897] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.366 [2024-12-10 14:30:56.007174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.366 [2024-12-10 14:30:56.007597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.366 [2024-12-10 14:30:56.007642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.366 [2024-12-10 14:30:56.007666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.366 [2024-12-10 14:30:56.008052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.366 [2024-12-10 14:30:56.008215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.366 [2024-12-10 14:30:56.008233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.366 [2024-12-10 14:30:56.008242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.366 [2024-12-10 14:30:56.008250] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.366 [2024-12-10 14:30:56.019989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.366 [2024-12-10 14:30:56.020333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.366 [2024-12-10 14:30:56.020351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.366 [2024-12-10 14:30:56.020358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.366 [2024-12-10 14:30:56.020520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.366 [2024-12-10 14:30:56.020681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.366 [2024-12-10 14:30:56.020695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.366 [2024-12-10 14:30:56.020702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.366 [2024-12-10 14:30:56.020708] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.366 [2024-12-10 14:30:56.032849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.366 [2024-12-10 14:30:56.033279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.366 [2024-12-10 14:30:56.033325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.366 [2024-12-10 14:30:56.033349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.366 [2024-12-10 14:30:56.033850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.366 [2024-12-10 14:30:56.034013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.366 [2024-12-10 14:30:56.034023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.366 [2024-12-10 14:30:56.034030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.366 [2024-12-10 14:30:56.034036] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.366 [2024-12-10 14:30:56.045622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.366 [2024-12-10 14:30:56.046007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.366 [2024-12-10 14:30:56.046024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.366 [2024-12-10 14:30:56.046031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.366 [2024-12-10 14:30:56.046192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.366 [2024-12-10 14:30:56.046361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.366 [2024-12-10 14:30:56.046372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.366 [2024-12-10 14:30:56.046378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.366 [2024-12-10 14:30:56.046384] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.366 [2024-12-10 14:30:56.058419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.366 [2024-12-10 14:30:56.058832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.366 [2024-12-10 14:30:56.058849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.366 [2024-12-10 14:30:56.058856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.366 [2024-12-10 14:30:56.059017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.367 [2024-12-10 14:30:56.059177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.367 [2024-12-10 14:30:56.059187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.367 [2024-12-10 14:30:56.059193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.367 [2024-12-10 14:30:56.059202] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.367 7108.50 IOPS, 27.77 MiB/s [2024-12-10T13:30:56.107Z] [2024-12-10 14:30:56.071297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.367 [2024-12-10 14:30:56.071710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.367 [2024-12-10 14:30:56.071727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.367 [2024-12-10 14:30:56.071734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.367 [2024-12-10 14:30:56.071896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.367 [2024-12-10 14:30:56.072057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.367 [2024-12-10 14:30:56.072066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.367 [2024-12-10 14:30:56.072073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.367 [2024-12-10 14:30:56.072080] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.367 [2024-12-10 14:30:56.084104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.367 [2024-12-10 14:30:56.084531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.367 [2024-12-10 14:30:56.084549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.367 [2024-12-10 14:30:56.084557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.367 [2024-12-10 14:30:56.084732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.367 [2024-12-10 14:30:56.084907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.367 [2024-12-10 14:30:56.084917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.367 [2024-12-10 14:30:56.084924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.367 [2024-12-10 14:30:56.084931] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
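The 7108.50 IOPS line above is the test application's periodic throughput marker interleaved with the error stream, so the workload is still reporting progress while this controller sits in its reset loop. The second errno in each cycle, "(9): Bad file descriptor", follows directly from the first: by the time the qpair flush runs, the socket that failed to connect has already been torn down. A sketch of that effect under the same caveats as before (plain POSIX calls, with write() standing in for the qpair flush):

```c
/* Hedged illustration: any I/O on an fd that has already been closed
 * fails with EBADF, which is errno 9 on Linux. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    close(fd);                     /* teardown after the failed connect */

    if (write(fd, "x", 1) < 0)     /* later flush attempt on the dead fd */
        /* Prints: flush failed (9): Bad file descriptor */
        printf("flush failed (%d): %s\n", errno, strerror(errno));
    return 0;
}
```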
00:28:55.367 [2024-12-10 14:30:56.097136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.367 [2024-12-10 14:30:56.097568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.367 [2024-12-10 14:30:56.097586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.367 [2024-12-10 14:30:56.097594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.367 [2024-12-10 14:30:56.097769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.367 [2024-12-10 14:30:56.097944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.367 [2024-12-10 14:30:56.097954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.367 [2024-12-10 14:30:56.097963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.367 [2024-12-10 14:30:56.097970] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.627 [2024-12-10 14:30:56.110222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.627 [2024-12-10 14:30:56.110664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.627 [2024-12-10 14:30:56.110709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.627 [2024-12-10 14:30:56.110732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.627 [2024-12-10 14:30:56.111317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.627 [2024-12-10 14:30:56.111536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.627 [2024-12-10 14:30:56.111547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.627 [2024-12-10 14:30:56.111554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.627 [2024-12-10 14:30:56.111561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.627 [2024-12-10 14:30:56.123003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.627 [2024-12-10 14:30:56.123350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.627 [2024-12-10 14:30:56.123368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.627 [2024-12-10 14:30:56.123375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.627 [2024-12-10 14:30:56.123536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.627 [2024-12-10 14:30:56.123697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.627 [2024-12-10 14:30:56.123707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.627 [2024-12-10 14:30:56.123713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.627 [2024-12-10 14:30:56.123719] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.627 [2024-12-10 14:30:56.135789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.627 [2024-12-10 14:30:56.136066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.627 [2024-12-10 14:30:56.136083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.627 [2024-12-10 14:30:56.136090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.627 [2024-12-10 14:30:56.136260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.627 [2024-12-10 14:30:56.136422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.627 [2024-12-10 14:30:56.136431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.627 [2024-12-10 14:30:56.136438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.627 [2024-12-10 14:30:56.136444] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.627 [2024-12-10 14:30:56.148629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.627 [2024-12-10 14:30:56.149043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.627 [2024-12-10 14:30:56.149060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.627 [2024-12-10 14:30:56.149067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.627 [2024-12-10 14:30:56.149240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.627 [2024-12-10 14:30:56.149401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.627 [2024-12-10 14:30:56.149410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.627 [2024-12-10 14:30:56.149417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.627 [2024-12-10 14:30:56.149423] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.627 [2024-12-10 14:30:56.161432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.627 [2024-12-10 14:30:56.161850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.627 [2024-12-10 14:30:56.161867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.627 [2024-12-10 14:30:56.161875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.627 [2024-12-10 14:30:56.162035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.627 [2024-12-10 14:30:56.162197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.627 [2024-12-10 14:30:56.162206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.627 [2024-12-10 14:30:56.162213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.627 [2024-12-10 14:30:56.162228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.627 [2024-12-10 14:30:56.174270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.627 [2024-12-10 14:30:56.174691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.627 [2024-12-10 14:30:56.174734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.628 [2024-12-10 14:30:56.174758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.628 [2024-12-10 14:30:56.175278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.628 [2024-12-10 14:30:56.175442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.628 [2024-12-10 14:30:56.175450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.628 [2024-12-10 14:30:56.175456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.628 [2024-12-10 14:30:56.175462] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.628 [2024-12-10 14:30:56.187143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.628 [2024-12-10 14:30:56.187480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.628 [2024-12-10 14:30:56.187498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.628 [2024-12-10 14:30:56.187506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.628 [2024-12-10 14:30:56.187666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.628 [2024-12-10 14:30:56.187828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.628 [2024-12-10 14:30:56.187840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.628 [2024-12-10 14:30:56.187847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.628 [2024-12-10 14:30:56.187853] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.628 [2024-12-10 14:30:56.200033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.628 [2024-12-10 14:30:56.200449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.628 [2024-12-10 14:30:56.200467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.628 [2024-12-10 14:30:56.200475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.628 [2024-12-10 14:30:56.200644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.628 [2024-12-10 14:30:56.200815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.628 [2024-12-10 14:30:56.200825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.628 [2024-12-10 14:30:56.200831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.628 [2024-12-10 14:30:56.200837] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.628 [2024-12-10 14:30:56.212795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.628 [2024-12-10 14:30:56.213215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.628 [2024-12-10 14:30:56.213238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.628 [2024-12-10 14:30:56.213246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.628 [2024-12-10 14:30:56.213405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.628 [2024-12-10 14:30:56.213567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.628 [2024-12-10 14:30:56.213576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.628 [2024-12-10 14:30:56.213582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.628 [2024-12-10 14:30:56.213588] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.628 [2024-12-10 14:30:56.225632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.628 [2024-12-10 14:30:56.226061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.628 [2024-12-10 14:30:56.226105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.628 [2024-12-10 14:30:56.226129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.628 [2024-12-10 14:30:56.226735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.628 [2024-12-10 14:30:56.227305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.628 [2024-12-10 14:30:56.227315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.628 [2024-12-10 14:30:56.227322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.628 [2024-12-10 14:30:56.227331] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.628 [2024-12-10 14:30:56.238494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.628 [2024-12-10 14:30:56.238836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.628 [2024-12-10 14:30:56.238853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.628 [2024-12-10 14:30:56.238860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.628 [2024-12-10 14:30:56.239021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.628 [2024-12-10 14:30:56.239182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.628 [2024-12-10 14:30:56.239191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.628 [2024-12-10 14:30:56.239197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.628 [2024-12-10 14:30:56.239203] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.628 [2024-12-10 14:30:56.251385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.628 [2024-12-10 14:30:56.251797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.628 [2024-12-10 14:30:56.251814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.628 [2024-12-10 14:30:56.251821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.628 [2024-12-10 14:30:56.251981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.628 [2024-12-10 14:30:56.252143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.628 [2024-12-10 14:30:56.252152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.628 [2024-12-10 14:30:56.252159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.628 [2024-12-10 14:30:56.252165] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.628 [2024-12-10 14:30:56.264203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.628 [2024-12-10 14:30:56.264616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.628 [2024-12-10 14:30:56.264653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.628 [2024-12-10 14:30:56.264678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.628 [2024-12-10 14:30:56.265212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.628 [2024-12-10 14:30:56.265459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.628 [2024-12-10 14:30:56.265479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.628 [2024-12-10 14:30:56.265494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.628 [2024-12-10 14:30:56.265508] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.628 [2024-12-10 14:30:56.279212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.628 [2024-12-10 14:30:56.279754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.628 [2024-12-10 14:30:56.279811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.628 [2024-12-10 14:30:56.279834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.628 [2024-12-10 14:30:56.280364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.628 [2024-12-10 14:30:56.280624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.628 [2024-12-10 14:30:56.280638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.628 [2024-12-10 14:30:56.280648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.628 [2024-12-10 14:30:56.280658] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.628 [2024-12-10 14:30:56.292165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.628 [2024-12-10 14:30:56.292600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.628 [2024-12-10 14:30:56.292647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.628 [2024-12-10 14:30:56.292671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.628 [2024-12-10 14:30:56.293159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.628 [2024-12-10 14:30:56.293336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.628 [2024-12-10 14:30:56.293345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.628 [2024-12-10 14:30:56.293352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.628 [2024-12-10 14:30:56.293357] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.628 [2024-12-10 14:30:56.305018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.628 [2024-12-10 14:30:56.305429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.628 [2024-12-10 14:30:56.305446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.628 [2024-12-10 14:30:56.305454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.628 [2024-12-10 14:30:56.305614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.629 [2024-12-10 14:30:56.305775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.629 [2024-12-10 14:30:56.305784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.629 [2024-12-10 14:30:56.305790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.629 [2024-12-10 14:30:56.305796] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.629 [2024-12-10 14:30:56.317845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.629 [2024-12-10 14:30:56.318271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.629 [2024-12-10 14:30:56.318317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.629 [2024-12-10 14:30:56.318343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.629 [2024-12-10 14:30:56.318723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.629 [2024-12-10 14:30:56.318886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.629 [2024-12-10 14:30:56.318896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.629 [2024-12-10 14:30:56.318902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.629 [2024-12-10 14:30:56.318908] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.629 [2024-12-10 14:30:56.330642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.629 [2024-12-10 14:30:56.331052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.629 [2024-12-10 14:30:56.331096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.629 [2024-12-10 14:30:56.331120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.629 [2024-12-10 14:30:56.331640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.629 [2024-12-10 14:30:56.331803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.629 [2024-12-10 14:30:56.331813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.629 [2024-12-10 14:30:56.331819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.629 [2024-12-10 14:30:56.331825] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.629 [2024-12-10 14:30:56.343439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.629 [2024-12-10 14:30:56.343857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.629 [2024-12-10 14:30:56.343875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.629 [2024-12-10 14:30:56.343882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.629 [2024-12-10 14:30:56.344051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.629 [2024-12-10 14:30:56.344229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.629 [2024-12-10 14:30:56.344240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.629 [2024-12-10 14:30:56.344248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.629 [2024-12-10 14:30:56.344255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.629 [2024-12-10 14:30:56.356570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.629 [2024-12-10 14:30:56.356996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.629 [2024-12-10 14:30:56.357013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.629 [2024-12-10 14:30:56.357021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.629 [2024-12-10 14:30:56.357196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.629 [2024-12-10 14:30:56.357378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.629 [2024-12-10 14:30:56.357392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.629 [2024-12-10 14:30:56.357398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.629 [2024-12-10 14:30:56.357405] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.889 [2024-12-10 14:30:56.369754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.889 [2024-12-10 14:30:56.370160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.889 [2024-12-10 14:30:56.370177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.889 [2024-12-10 14:30:56.370185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.889 [2024-12-10 14:30:56.370364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.889 [2024-12-10 14:30:56.370539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.889 [2024-12-10 14:30:56.370550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.889 [2024-12-10 14:30:56.370557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.889 [2024-12-10 14:30:56.370563] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.889 [2024-12-10 14:30:56.382718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.889 [2024-12-10 14:30:56.383132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.889 [2024-12-10 14:30:56.383149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.889 [2024-12-10 14:30:56.383156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.889 [2024-12-10 14:30:56.383325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.889 [2024-12-10 14:30:56.383487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.889 [2024-12-10 14:30:56.383496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.889 [2024-12-10 14:30:56.383502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.889 [2024-12-10 14:30:56.383508] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.889 [2024-12-10 14:30:56.395484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.889 [2024-12-10 14:30:56.395891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.889 [2024-12-10 14:30:56.395908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.889 [2024-12-10 14:30:56.395915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.889 [2024-12-10 14:30:56.396076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.889 [2024-12-10 14:30:56.396243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.889 [2024-12-10 14:30:56.396254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.889 [2024-12-10 14:30:56.396260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.889 [2024-12-10 14:30:56.396269] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.889 [2024-12-10 14:30:56.408297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.889 [2024-12-10 14:30:56.408710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.889 [2024-12-10 14:30:56.408727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.889 [2024-12-10 14:30:56.408734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.889 [2024-12-10 14:30:56.408894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.889 [2024-12-10 14:30:56.409055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.889 [2024-12-10 14:30:56.409065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.889 [2024-12-10 14:30:56.409071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.889 [2024-12-10 14:30:56.409077] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.889 [2024-12-10 14:30:56.421126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.889 [2024-12-10 14:30:56.421457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.889 [2024-12-10 14:30:56.421474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.889 [2024-12-10 14:30:56.421481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.889 [2024-12-10 14:30:56.421641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.889 [2024-12-10 14:30:56.421802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.889 [2024-12-10 14:30:56.421811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.889 [2024-12-10 14:30:56.421818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.889 [2024-12-10 14:30:56.421824] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.889 [2024-12-10 14:30:56.434007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.889 [2024-12-10 14:30:56.434440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.889 [2024-12-10 14:30:56.434484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.889 [2024-12-10 14:30:56.434508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.889 [2024-12-10 14:30:56.435003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.889 [2024-12-10 14:30:56.435165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.889 [2024-12-10 14:30:56.435174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.889 [2024-12-10 14:30:56.435180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.889 [2024-12-10 14:30:56.435187] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.889 [2024-12-10 14:30:56.446768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.889 [2024-12-10 14:30:56.447185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.889 [2024-12-10 14:30:56.447233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420
00:28:55.889 [2024-12-10 14:30:56.447267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set
00:28:55.889 [2024-12-10 14:30:56.447813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor
00:28:55.889 [2024-12-10 14:30:56.447976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.889 [2024-12-10 14:30:56.447983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.889 [2024-12-10 14:30:56.447989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.889 [2024-12-10 14:30:56.447995] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.889 [2024-12-10 14:30:56.459583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.889 [2024-12-10 14:30:56.459972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.889 [2024-12-10 14:30:56.459988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:55.889 [2024-12-10 14:30:56.459995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:55.889 [2024-12-10 14:30:56.460156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:55.889 [2024-12-10 14:30:56.460324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.889 [2024-12-10 14:30:56.460334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.889 [2024-12-10 14:30:56.460341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.890 [2024-12-10 14:30:56.460347] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:55.890 [2024-12-10 14:30:56.472358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.890 [2024-12-10 14:30:56.472781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.890 [2024-12-10 14:30:56.472825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:55.890 [2024-12-10 14:30:56.472849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:55.890 [2024-12-10 14:30:56.473465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:55.890 [2024-12-10 14:30:56.473939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.890 [2024-12-10 14:30:56.473948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.890 [2024-12-10 14:30:56.473955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.890 [2024-12-10 14:30:56.473960] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:55.890 [2024-12-10 14:30:56.485233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.890 [2024-12-10 14:30:56.485570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.890 [2024-12-10 14:30:56.485588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:55.890 [2024-12-10 14:30:56.485596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:55.890 [2024-12-10 14:30:56.485761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:55.890 [2024-12-10 14:30:56.485922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.890 [2024-12-10 14:30:56.485931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.890 [2024-12-10 14:30:56.485937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.890 [2024-12-10 14:30:56.485943] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:55.890 [2024-12-10 14:30:56.497983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.890 [2024-12-10 14:30:56.498394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.890 [2024-12-10 14:30:56.498411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:55.890 [2024-12-10 14:30:56.498419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:55.890 [2024-12-10 14:30:56.498580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:55.890 [2024-12-10 14:30:56.498741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.890 [2024-12-10 14:30:56.498750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.890 [2024-12-10 14:30:56.498756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.890 [2024-12-10 14:30:56.498763] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:55.890 [2024-12-10 14:30:56.510746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.890 [2024-12-10 14:30:56.511165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.890 [2024-12-10 14:30:56.511209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:55.890 [2024-12-10 14:30:56.511254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:55.890 [2024-12-10 14:30:56.511730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:55.890 [2024-12-10 14:30:56.511892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.890 [2024-12-10 14:30:56.511901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.890 [2024-12-10 14:30:56.511907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.890 [2024-12-10 14:30:56.511913] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:55.890 [2024-12-10 14:30:56.523501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.890 [2024-12-10 14:30:56.523913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.890 [2024-12-10 14:30:56.523929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:55.890 [2024-12-10 14:30:56.523937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:55.890 [2024-12-10 14:30:56.524096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:55.890 [2024-12-10 14:30:56.524264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.890 [2024-12-10 14:30:56.524278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.890 [2024-12-10 14:30:56.524285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.890 [2024-12-10 14:30:56.524292] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:55.890 [2024-12-10 14:30:56.536244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.890 [2024-12-10 14:30:56.536671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.890 [2024-12-10 14:30:56.536717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:55.890 [2024-12-10 14:30:56.536741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:55.890 [2024-12-10 14:30:56.537242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:55.890 [2024-12-10 14:30:56.537405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.890 [2024-12-10 14:30:56.537413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.890 [2024-12-10 14:30:56.537419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.890 [2024-12-10 14:30:56.537424] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:55.890 [2024-12-10 14:30:56.549037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.890 [2024-12-10 14:30:56.549453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.890 [2024-12-10 14:30:56.549470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:55.890 [2024-12-10 14:30:56.549477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:55.890 [2024-12-10 14:30:56.549636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:55.890 [2024-12-10 14:30:56.549797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.890 [2024-12-10 14:30:56.549807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.890 [2024-12-10 14:30:56.549813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.890 [2024-12-10 14:30:56.549819] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:55.890 [2024-12-10 14:30:56.561860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.890 [2024-12-10 14:30:56.562274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.890 [2024-12-10 14:30:56.562291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:55.890 [2024-12-10 14:30:56.562298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:55.890 [2024-12-10 14:30:56.562458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:55.890 [2024-12-10 14:30:56.562619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.890 [2024-12-10 14:30:56.562627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.890 [2024-12-10 14:30:56.562634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.890 [2024-12-10 14:30:56.562643] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:55.890 [2024-12-10 14:30:56.574695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.890 [2024-12-10 14:30:56.575054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.890 [2024-12-10 14:30:56.575100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:55.890 [2024-12-10 14:30:56.575124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:55.890 [2024-12-10 14:30:56.575748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:55.890 [2024-12-10 14:30:56.576321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.890 [2024-12-10 14:30:56.576332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.890 [2024-12-10 14:30:56.576339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.890 [2024-12-10 14:30:56.576345] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:55.890 [2024-12-10 14:30:56.587565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.890 [2024-12-10 14:30:56.587983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.890 [2024-12-10 14:30:56.588000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:55.890 [2024-12-10 14:30:56.588008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:55.890 [2024-12-10 14:30:56.588168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:55.890 [2024-12-10 14:30:56.588338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.890 [2024-12-10 14:30:56.588349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.890 [2024-12-10 14:30:56.588355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.890 [2024-12-10 14:30:56.588362] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:55.890 [2024-12-10 14:30:56.600410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.891 [2024-12-10 14:30:56.600764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.891 [2024-12-10 14:30:56.600781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:55.891 [2024-12-10 14:30:56.600789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:55.891 [2024-12-10 14:30:56.600963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:55.891 [2024-12-10 14:30:56.601137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.891 [2024-12-10 14:30:56.601148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.891 [2024-12-10 14:30:56.601154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.891 [2024-12-10 14:30:56.601161] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:55.891 [2024-12-10 14:30:56.613596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.891 [2024-12-10 14:30:56.614033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.891 [2024-12-10 14:30:56.614050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:55.891 [2024-12-10 14:30:56.614058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:55.891 [2024-12-10 14:30:56.614240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:55.891 [2024-12-10 14:30:56.614416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.891 [2024-12-10 14:30:56.614426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.891 [2024-12-10 14:30:56.614433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.891 [2024-12-10 14:30:56.614440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:55.891 [2024-12-10 14:30:56.626649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.150 [2024-12-10 14:30:56.627023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.151 [2024-12-10 14:30:56.627041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.151 [2024-12-10 14:30:56.627048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.151 [2024-12-10 14:30:56.627229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.151 [2024-12-10 14:30:56.627406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.151 [2024-12-10 14:30:56.627416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.151 [2024-12-10 14:30:56.627422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.151 [2024-12-10 14:30:56.627429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.151 [2024-12-10 14:30:56.639696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.151 [2024-12-10 14:30:56.640120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.151 [2024-12-10 14:30:56.640165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.151 [2024-12-10 14:30:56.640191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.151 [2024-12-10 14:30:56.640792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.151 [2024-12-10 14:30:56.641310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.151 [2024-12-10 14:30:56.641321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.151 [2024-12-10 14:30:56.641328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.151 [2024-12-10 14:30:56.641334] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.151 [2024-12-10 14:30:56.652455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.151 [2024-12-10 14:30:56.652899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.151 [2024-12-10 14:30:56.652946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.151 [2024-12-10 14:30:56.652970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.151 [2024-12-10 14:30:56.653344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.151 [2024-12-10 14:30:56.653509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.151 [2024-12-10 14:30:56.653518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.151 [2024-12-10 14:30:56.653525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.151 [2024-12-10 14:30:56.653530] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.151 [2024-12-10 14:30:56.665500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.151 [2024-12-10 14:30:56.665852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.151 [2024-12-10 14:30:56.665870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.151 [2024-12-10 14:30:56.665878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.151 [2024-12-10 14:30:56.666051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.151 [2024-12-10 14:30:56.666234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.151 [2024-12-10 14:30:56.666244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.151 [2024-12-10 14:30:56.666251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.151 [2024-12-10 14:30:56.666258] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.151 [2024-12-10 14:30:56.678554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.151 [2024-12-10 14:30:56.678977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.151 [2024-12-10 14:30:56.678994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.151 [2024-12-10 14:30:56.679002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.151 [2024-12-10 14:30:56.679176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.151 [2024-12-10 14:30:56.679360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.151 [2024-12-10 14:30:56.679371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.151 [2024-12-10 14:30:56.679378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.151 [2024-12-10 14:30:56.679384] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.151 [2024-12-10 14:30:56.691694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.151 [2024-12-10 14:30:56.692120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.151 [2024-12-10 14:30:56.692137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.151 [2024-12-10 14:30:56.692145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.151 [2024-12-10 14:30:56.692326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.151 [2024-12-10 14:30:56.692501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.151 [2024-12-10 14:30:56.692515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.151 [2024-12-10 14:30:56.692522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.151 [2024-12-10 14:30:56.692528] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.151 [2024-12-10 14:30:56.704849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.151 [2024-12-10 14:30:56.705198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.151 [2024-12-10 14:30:56.705216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.151 [2024-12-10 14:30:56.705230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.151 [2024-12-10 14:30:56.705404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.151 [2024-12-10 14:30:56.705578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.151 [2024-12-10 14:30:56.705588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.151 [2024-12-10 14:30:56.705595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.151 [2024-12-10 14:30:56.705602] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.151 [2024-12-10 14:30:56.718109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.151 [2024-12-10 14:30:56.718554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.151 [2024-12-10 14:30:56.718573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.151 [2024-12-10 14:30:56.718581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.151 [2024-12-10 14:30:56.718772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.151 [2024-12-10 14:30:56.718958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.151 [2024-12-10 14:30:56.718968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.151 [2024-12-10 14:30:56.718976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.151 [2024-12-10 14:30:56.718983] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.151 [2024-12-10 14:30:56.731411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.151 [2024-12-10 14:30:56.731780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.151 [2024-12-10 14:30:56.731799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.151 [2024-12-10 14:30:56.731808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.151 [2024-12-10 14:30:56.731992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.151 [2024-12-10 14:30:56.732180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.151 [2024-12-10 14:30:56.732190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.151 [2024-12-10 14:30:56.732198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.151 [2024-12-10 14:30:56.732209] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.151 [2024-12-10 14:30:56.744654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.151 [2024-12-10 14:30:56.745107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.151 [2024-12-10 14:30:56.745124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.151 [2024-12-10 14:30:56.745133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.151 [2024-12-10 14:30:56.745325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.151 [2024-12-10 14:30:56.745512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.151 [2024-12-10 14:30:56.745522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.151 [2024-12-10 14:30:56.745531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.151 [2024-12-10 14:30:56.745538] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.151 [2024-12-10 14:30:56.757991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.151 [2024-12-10 14:30:56.758414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-12-10 14:30:56.758433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.152 [2024-12-10 14:30:56.758442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.152 [2024-12-10 14:30:56.758628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.152 [2024-12-10 14:30:56.758815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.152 [2024-12-10 14:30:56.758826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.152 [2024-12-10 14:30:56.758833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.152 [2024-12-10 14:30:56.758841] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.152 [2024-12-10 14:30:56.771345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.152 [2024-12-10 14:30:56.771678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-12-10 14:30:56.771696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.152 [2024-12-10 14:30:56.771704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.152 [2024-12-10 14:30:56.771890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.152 [2024-12-10 14:30:56.772077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.152 [2024-12-10 14:30:56.772087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.152 [2024-12-10 14:30:56.772094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.152 [2024-12-10 14:30:56.772101] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.152 [2024-12-10 14:30:56.784672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.152 [2024-12-10 14:30:56.785127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-12-10 14:30:56.785145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.152 [2024-12-10 14:30:56.785154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.152 [2024-12-10 14:30:56.785358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.152 [2024-12-10 14:30:56.785558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.152 [2024-12-10 14:30:56.785569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.152 [2024-12-10 14:30:56.785576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.152 [2024-12-10 14:30:56.785584] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.152 [2024-12-10 14:30:56.797858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.152 [2024-12-10 14:30:56.798302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-12-10 14:30:56.798321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.152 [2024-12-10 14:30:56.798329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.152 [2024-12-10 14:30:56.798514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.152 [2024-12-10 14:30:56.798702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.152 [2024-12-10 14:30:56.798712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.152 [2024-12-10 14:30:56.798720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.152 [2024-12-10 14:30:56.798728] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.152 [2024-12-10 14:30:56.811060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.152 [2024-12-10 14:30:56.811485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-12-10 14:30:56.811503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.152 [2024-12-10 14:30:56.811512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.152 [2024-12-10 14:30:56.811698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.152 [2024-12-10 14:30:56.811885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.152 [2024-12-10 14:30:56.811895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.152 [2024-12-10 14:30:56.811903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.152 [2024-12-10 14:30:56.811909] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.152 [2024-12-10 14:30:56.824345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.152 [2024-12-10 14:30:56.824758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-12-10 14:30:56.824775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.152 [2024-12-10 14:30:56.824783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.152 [2024-12-10 14:30:56.824972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.152 [2024-12-10 14:30:56.825160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.152 [2024-12-10 14:30:56.825170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.152 [2024-12-10 14:30:56.825179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.152 [2024-12-10 14:30:56.825187] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.152 [2024-12-10 14:30:56.837643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.152 [2024-12-10 14:30:56.838107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-12-10 14:30:56.838126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.152 [2024-12-10 14:30:56.838135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.152 [2024-12-10 14:30:56.838340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.152 [2024-12-10 14:30:56.838541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.152 [2024-12-10 14:30:56.838551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.152 [2024-12-10 14:30:56.838559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.152 [2024-12-10 14:30:56.838566] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.152 [2024-12-10 14:30:56.850964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.152 [2024-12-10 14:30:56.851355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-12-10 14:30:56.851374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.152 [2024-12-10 14:30:56.851383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.152 [2024-12-10 14:30:56.851568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.152 [2024-12-10 14:30:56.851754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.152 [2024-12-10 14:30:56.851764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.152 [2024-12-10 14:30:56.851772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.152 [2024-12-10 14:30:56.851779] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.152 [2024-12-10 14:30:56.864558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.152 [2024-12-10 14:30:56.864989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-12-10 14:30:56.865007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.152 [2024-12-10 14:30:56.865016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.152 [2024-12-10 14:30:56.865213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.152 [2024-12-10 14:30:56.865428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.152 [2024-12-10 14:30:56.865442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.152 [2024-12-10 14:30:56.865449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.152 [2024-12-10 14:30:56.865457] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.152 [2024-12-10 14:30:56.877781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.152 [2024-12-10 14:30:56.878197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-12-10 14:30:56.878215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.152 [2024-12-10 14:30:56.878230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.152 [2024-12-10 14:30:56.878426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.152 [2024-12-10 14:30:56.878603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.152 [2024-12-10 14:30:56.878613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.152 [2024-12-10 14:30:56.878619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.152 [2024-12-10 14:30:56.878626] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.413 [2024-12-10 14:30:56.891179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.413 [2024-12-10 14:30:56.891626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.413 [2024-12-10 14:30:56.891644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.413 [2024-12-10 14:30:56.891652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.413 [2024-12-10 14:30:56.891838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.413 [2024-12-10 14:30:56.892026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.413 [2024-12-10 14:30:56.892035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.413 [2024-12-10 14:30:56.892043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.413 [2024-12-10 14:30:56.892051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.413 [2024-12-10 14:30:56.904277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.413 [2024-12-10 14:30:56.904686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.413 [2024-12-10 14:30:56.904703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.413 [2024-12-10 14:30:56.904711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.413 [2024-12-10 14:30:56.904886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.413 [2024-12-10 14:30:56.905061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.413 [2024-12-10 14:30:56.905071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.413 [2024-12-10 14:30:56.905077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.413 [2024-12-10 14:30:56.905088] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.413 [2024-12-10 14:30:56.917332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.413 [2024-12-10 14:30:56.917763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.413 [2024-12-10 14:30:56.917780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.413 [2024-12-10 14:30:56.917788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.413 [2024-12-10 14:30:56.917962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.413 [2024-12-10 14:30:56.918138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.413 [2024-12-10 14:30:56.918148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.413 [2024-12-10 14:30:56.918155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.413 [2024-12-10 14:30:56.918162] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.413 [2024-12-10 14:30:56.930364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.413 [2024-12-10 14:30:56.930774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.413 [2024-12-10 14:30:56.930791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.413 [2024-12-10 14:30:56.930799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.413 [2024-12-10 14:30:56.930974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.413 [2024-12-10 14:30:56.931150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.413 [2024-12-10 14:30:56.931160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.413 [2024-12-10 14:30:56.931167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.413 [2024-12-10 14:30:56.931174] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.413 [2024-12-10 14:30:56.943453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.413 [2024-12-10 14:30:56.943749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.413 [2024-12-10 14:30:56.943767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.413 [2024-12-10 14:30:56.943775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.413 [2024-12-10 14:30:56.943945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.413 [2024-12-10 14:30:56.944116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.413 [2024-12-10 14:30:56.944125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.413 [2024-12-10 14:30:56.944132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.413 [2024-12-10 14:30:56.944139] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.413 [2024-12-10 14:30:56.956437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.413 [2024-12-10 14:30:56.956849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.413 [2024-12-10 14:30:56.956892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.413 [2024-12-10 14:30:56.956915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.413 [2024-12-10 14:30:56.957326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.413 [2024-12-10 14:30:56.957499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.413 [2024-12-10 14:30:56.957508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.413 [2024-12-10 14:30:56.957515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.413 [2024-12-10 14:30:56.957523] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.413 [2024-12-10 14:30:56.969509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.413 [2024-12-10 14:30:56.969799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.413 [2024-12-10 14:30:56.969843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.413 [2024-12-10 14:30:56.969866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.413 [2024-12-10 14:30:56.970376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.413 [2024-12-10 14:30:56.970539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.413 [2024-12-10 14:30:56.970549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.413 [2024-12-10 14:30:56.970555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.413 [2024-12-10 14:30:56.970561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.413 [2024-12-10 14:30:56.982373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.413 [2024-12-10 14:30:56.982734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.413 [2024-12-10 14:30:56.982751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.413 [2024-12-10 14:30:56.982758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.413 [2024-12-10 14:30:56.982920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.413 [2024-12-10 14:30:56.983081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.413 [2024-12-10 14:30:56.983090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.413 [2024-12-10 14:30:56.983097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.413 [2024-12-10 14:30:56.983103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.413 [2024-12-10 14:30:56.995274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.413 [2024-12-10 14:30:56.995668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.413 [2024-12-10 14:30:56.995684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.413 [2024-12-10 14:30:56.995692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.413 [2024-12-10 14:30:56.995856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.413 [2024-12-10 14:30:56.996017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.413 [2024-12-10 14:30:56.996026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.413 [2024-12-10 14:30:56.996033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.413 [2024-12-10 14:30:56.996039] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.414 [2024-12-10 14:30:57.008058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.414 [2024-12-10 14:30:57.008352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.414 [2024-12-10 14:30:57.008368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.414 [2024-12-10 14:30:57.008377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.414 [2024-12-10 14:30:57.008538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.414 [2024-12-10 14:30:57.008698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.414 [2024-12-10 14:30:57.008707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.414 [2024-12-10 14:30:57.008714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.414 [2024-12-10 14:30:57.008720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.414 [2024-12-10 14:30:57.020825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.414 [2024-12-10 14:30:57.021149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.414 [2024-12-10 14:30:57.021166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.414 [2024-12-10 14:30:57.021173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.414 [2024-12-10 14:30:57.021345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.414 [2024-12-10 14:30:57.021507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.414 [2024-12-10 14:30:57.021517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.414 [2024-12-10 14:30:57.021523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.414 [2024-12-10 14:30:57.021529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.414 [2024-12-10 14:30:57.033596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.414 [2024-12-10 14:30:57.033961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.414 [2024-12-10 14:30:57.034005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.414 [2024-12-10 14:30:57.034029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.414 [2024-12-10 14:30:57.034631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.414 [2024-12-10 14:30:57.035235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.414 [2024-12-10 14:30:57.035271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.414 [2024-12-10 14:30:57.035293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.414 [2024-12-10 14:30:57.035324] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.414 [2024-12-10 14:30:57.046359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.414 [2024-12-10 14:30:57.046624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.414 [2024-12-10 14:30:57.046641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.414 [2024-12-10 14:30:57.046648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.414 [2024-12-10 14:30:57.046808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.414 [2024-12-10 14:30:57.046970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.414 [2024-12-10 14:30:57.046979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.414 [2024-12-10 14:30:57.046986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.414 [2024-12-10 14:30:57.046991] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.414 [2024-12-10 14:30:57.059246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.414 [2024-12-10 14:30:57.059596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.414 [2024-12-10 14:30:57.059612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.414 [2024-12-10 14:30:57.059620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.414 [2024-12-10 14:30:57.059780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.414 [2024-12-10 14:30:57.059942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.414 [2024-12-10 14:30:57.059951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.414 [2024-12-10 14:30:57.059958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.414 [2024-12-10 14:30:57.059964] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.414 5686.80 IOPS, 22.21 MiB/s [2024-12-10T13:30:57.154Z] [2024-12-10 14:30:57.073493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.414 [2024-12-10 14:30:57.073766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.414 [2024-12-10 14:30:57.073783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.414 [2024-12-10 14:30:57.073791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.414 [2024-12-10 14:30:57.073952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.414 [2024-12-10 14:30:57.074113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.414 [2024-12-10 14:30:57.074122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.414 [2024-12-10 14:30:57.074129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.414 [2024-12-10 14:30:57.074139] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.414 [2024-12-10 14:30:57.086318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.414 [2024-12-10 14:30:57.086661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.414 [2024-12-10 14:30:57.086678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.414 [2024-12-10 14:30:57.086685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.414 [2024-12-10 14:30:57.086845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.414 [2024-12-10 14:30:57.087006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.414 [2024-12-10 14:30:57.087015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.414 [2024-12-10 14:30:57.087021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.414 [2024-12-10 14:30:57.087028] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.414 [2024-12-10 14:30:57.099200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.414 [2024-12-10 14:30:57.099530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.414 [2024-12-10 14:30:57.099548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.414 [2024-12-10 14:30:57.099555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.414 [2024-12-10 14:30:57.099715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.414 [2024-12-10 14:30:57.099877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.414 [2024-12-10 14:30:57.099886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.414 [2024-12-10 14:30:57.099893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.414 [2024-12-10 14:30:57.099898] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.414 [2024-12-10 14:30:57.112387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.414 [2024-12-10 14:30:57.112826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.414 [2024-12-10 14:30:57.112843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.414 [2024-12-10 14:30:57.112851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.414 [2024-12-10 14:30:57.113027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.414 [2024-12-10 14:30:57.113202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.414 [2024-12-10 14:30:57.113212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.414 [2024-12-10 14:30:57.113228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.414 [2024-12-10 14:30:57.113236] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.414 [2024-12-10 14:30:57.125536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.414 [2024-12-10 14:30:57.125813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.414 [2024-12-10 14:30:57.125831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.414 [2024-12-10 14:30:57.125840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.414 [2024-12-10 14:30:57.126013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.414 [2024-12-10 14:30:57.126190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.414 [2024-12-10 14:30:57.126199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.414 [2024-12-10 14:30:57.126206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.414 [2024-12-10 14:30:57.126213] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.414 [2024-12-10 14:30:57.138516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.414 [2024-12-10 14:30:57.138931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.415 [2024-12-10 14:30:57.138948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.415 [2024-12-10 14:30:57.138955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.415 [2024-12-10 14:30:57.139116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.415 [2024-12-10 14:30:57.139303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.415 [2024-12-10 14:30:57.139313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.415 [2024-12-10 14:30:57.139320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.415 [2024-12-10 14:30:57.139326] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.675 [2024-12-10 14:30:57.151583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.675 [2024-12-10 14:30:57.151959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.675 [2024-12-10 14:30:57.151976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.675 [2024-12-10 14:30:57.151983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.675 [2024-12-10 14:30:57.152152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.675 [2024-12-10 14:30:57.152347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.675 [2024-12-10 14:30:57.152358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.675 [2024-12-10 14:30:57.152364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.675 [2024-12-10 14:30:57.152371] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.675 [2024-12-10 14:30:57.164372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.675 [2024-12-10 14:30:57.164767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.675 [2024-12-10 14:30:57.164783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.675 [2024-12-10 14:30:57.164790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.675 [2024-12-10 14:30:57.164954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.675 [2024-12-10 14:30:57.165115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.675 [2024-12-10 14:30:57.165124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.675 [2024-12-10 14:30:57.165131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.675 [2024-12-10 14:30:57.165137] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.675 [2024-12-10 14:30:57.177201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.675 [2024-12-10 14:30:57.177626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.675 [2024-12-10 14:30:57.177669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.675 [2024-12-10 14:30:57.177694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.675 [2024-12-10 14:30:57.178295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.675 [2024-12-10 14:30:57.178529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.675 [2024-12-10 14:30:57.178539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.675 [2024-12-10 14:30:57.178545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.675 [2024-12-10 14:30:57.178551] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.675 [2024-12-10 14:30:57.189982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.675 [2024-12-10 14:30:57.190376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.675 [2024-12-10 14:30:57.190394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.675 [2024-12-10 14:30:57.190402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.675 [2024-12-10 14:30:57.190563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.675 [2024-12-10 14:30:57.190725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.675 [2024-12-10 14:30:57.190734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.675 [2024-12-10 14:30:57.190740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.675 [2024-12-10 14:30:57.190746] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.675 [2024-12-10 14:30:57.202908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.675 [2024-12-10 14:30:57.203309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.675 [2024-12-10 14:30:57.203354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.675 [2024-12-10 14:30:57.203377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.675 [2024-12-10 14:30:57.203814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.675 [2024-12-10 14:30:57.203976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.675 [2024-12-10 14:30:57.203989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.675 [2024-12-10 14:30:57.203995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.675 [2024-12-10 14:30:57.204001] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.675 [2024-12-10 14:30:57.215769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.675 [2024-12-10 14:30:57.216160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.675 [2024-12-10 14:30:57.216197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.675 [2024-12-10 14:30:57.216235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.675 [2024-12-10 14:30:57.216749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.675 [2024-12-10 14:30:57.216920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.675 [2024-12-10 14:30:57.216930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.675 [2024-12-10 14:30:57.216936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.675 [2024-12-10 14:30:57.216942] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.675 [2024-12-10 14:30:57.228519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.675 [2024-12-10 14:30:57.228878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.675 [2024-12-10 14:30:57.228894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.675 [2024-12-10 14:30:57.228901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.675 [2024-12-10 14:30:57.229061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.675 [2024-12-10 14:30:57.229231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.675 [2024-12-10 14:30:57.229240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.675 [2024-12-10 14:30:57.229263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.675 [2024-12-10 14:30:57.229270] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
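Note the cadence: the timestamps show a fresh disconnect/connect/fail cycle starting roughly every 12-13 ms, i.e. the host keeps polling the reconnect path rather than giving up while the target is down. A generic sketch of that retry pattern follows (an illustration only, not bdev_nvme's actual state machine; the interval and endpoint are assumptions read off the log):

    /* Generic reconnect-loop sketch: keep retrying a refused connection
     * with a short delay, matching the ~13 ms spacing between attempts
     * visible in the log timestamps. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    static bool try_connect(const char *ip, int port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, ip, &addr.sin_addr);

        bool ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
        if (!ok)
            fprintf(stderr, "connect() failed, errno = %d\n", errno);
        close(fd);
        return ok;
    }

    int main(void)
    {
        /* Sleep ~13 ms between tries, roughly the cadence in the log,
         * until the target comes back and the connect succeeds. */
        struct timespec delay = { .tv_sec = 0, .tv_nsec = 13 * 1000 * 1000 };
        while (!try_connect("10.0.0.2", 4420))
            nanosleep(&delay, NULL);
        printf("reconnected\n");
        return 0;
    }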
00:28:56.675 [2024-12-10 14:30:57.241377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.676 [2024-12-10 14:30:57.241779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.676 [2024-12-10 14:30:57.241823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.676 [2024-12-10 14:30:57.241847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.676 [2024-12-10 14:30:57.242446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.676 [2024-12-10 14:30:57.242904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.676 [2024-12-10 14:30:57.242913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.676 [2024-12-10 14:30:57.242920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.676 [2024-12-10 14:30:57.242931] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.676 [2024-12-10 14:30:57.254246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.676 [2024-12-10 14:30:57.254593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.676 [2024-12-10 14:30:57.254609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.676 [2024-12-10 14:30:57.254616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.676 [2024-12-10 14:30:57.254776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.676 [2024-12-10 14:30:57.254938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.676 [2024-12-10 14:30:57.254947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.676 [2024-12-10 14:30:57.254954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.676 [2024-12-10 14:30:57.254961] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.676 [2024-12-10 14:30:57.267081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.676 [2024-12-10 14:30:57.267482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.676 [2024-12-10 14:30:57.267500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.676 [2024-12-10 14:30:57.267508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.676 [2024-12-10 14:30:57.267668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.676 [2024-12-10 14:30:57.267830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.676 [2024-12-10 14:30:57.267839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.676 [2024-12-10 14:30:57.267845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.676 [2024-12-10 14:30:57.267851] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.676 [2024-12-10 14:30:57.279928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.676 [2024-12-10 14:30:57.280291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.676 [2024-12-10 14:30:57.280308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.676 [2024-12-10 14:30:57.280315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.676 [2024-12-10 14:30:57.280475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.676 [2024-12-10 14:30:57.280636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.676 [2024-12-10 14:30:57.280646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.676 [2024-12-10 14:30:57.280652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.676 [2024-12-10 14:30:57.280658] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.676 [2024-12-10 14:30:57.292738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.676 [2024-12-10 14:30:57.293098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.676 [2024-12-10 14:30:57.293141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.676 [2024-12-10 14:30:57.293165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.676 [2024-12-10 14:30:57.293676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.676 [2024-12-10 14:30:57.293848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.676 [2024-12-10 14:30:57.293857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.676 [2024-12-10 14:30:57.293864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.676 [2024-12-10 14:30:57.293870] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.676 [2024-12-10 14:30:57.305526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.676 [2024-12-10 14:30:57.305944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.676 [2024-12-10 14:30:57.305994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.676 [2024-12-10 14:30:57.306019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.676 [2024-12-10 14:30:57.306585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.676 [2024-12-10 14:30:57.306980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.676 [2024-12-10 14:30:57.306998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.676 [2024-12-10 14:30:57.307012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.676 [2024-12-10 14:30:57.307026] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.676 [2024-12-10 14:30:57.320246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.676 [2024-12-10 14:30:57.320765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.676 [2024-12-10 14:30:57.320787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.676 [2024-12-10 14:30:57.320798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.676 [2024-12-10 14:30:57.321054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.676 [2024-12-10 14:30:57.321318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.676 [2024-12-10 14:30:57.321332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.676 [2024-12-10 14:30:57.321342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.676 [2024-12-10 14:30:57.321352] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.676 [2024-12-10 14:30:57.333297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.676 [2024-12-10 14:30:57.333741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.676 [2024-12-10 14:30:57.333785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.676 [2024-12-10 14:30:57.333810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.676 [2024-12-10 14:30:57.334273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.676 [2024-12-10 14:30:57.334450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.676 [2024-12-10 14:30:57.334460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.676 [2024-12-10 14:30:57.334467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.676 [2024-12-10 14:30:57.334474] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.676 [2024-12-10 14:30:57.346080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.676 [2024-12-10 14:30:57.346491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.676 [2024-12-10 14:30:57.346508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.676 [2024-12-10 14:30:57.346516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.676 [2024-12-10 14:30:57.346677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.676 [2024-12-10 14:30:57.346837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.676 [2024-12-10 14:30:57.346847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.676 [2024-12-10 14:30:57.346853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.676 [2024-12-10 14:30:57.346860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.676 [2024-12-10 14:30:57.358932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.676 [2024-12-10 14:30:57.359328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.676 [2024-12-10 14:30:57.359374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.676 [2024-12-10 14:30:57.359397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.676 [2024-12-10 14:30:57.359982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.676 [2024-12-10 14:30:57.360172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.676 [2024-12-10 14:30:57.360181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.676 [2024-12-10 14:30:57.360187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.676 [2024-12-10 14:30:57.360194] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.676 [2024-12-10 14:30:57.371779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.676 [2024-12-10 14:30:57.372208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.676 [2024-12-10 14:30:57.372229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.677 [2024-12-10 14:30:57.372237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.677 [2024-12-10 14:30:57.372427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.677 [2024-12-10 14:30:57.372603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.677 [2024-12-10 14:30:57.372616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.677 [2024-12-10 14:30:57.372624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.677 [2024-12-10 14:30:57.372631] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.677 [2024-12-10 14:30:57.384862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.677 [2024-12-10 14:30:57.385293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.677 [2024-12-10 14:30:57.385312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.677 [2024-12-10 14:30:57.385320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.677 [2024-12-10 14:30:57.385495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.677 [2024-12-10 14:30:57.385670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.677 [2024-12-10 14:30:57.385680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.677 [2024-12-10 14:30:57.385687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.677 [2024-12-10 14:30:57.385694] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.677 [2024-12-10 14:30:57.397668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.677 [2024-12-10 14:30:57.398078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.677 [2024-12-10 14:30:57.398095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.677 [2024-12-10 14:30:57.398102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.677 [2024-12-10 14:30:57.398267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.677 [2024-12-10 14:30:57.398453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.677 [2024-12-10 14:30:57.398463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.677 [2024-12-10 14:30:57.398469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.677 [2024-12-10 14:30:57.398475] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.677 [2024-12-10 14:30:57.410808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.677 [2024-12-10 14:30:57.411235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.677 [2024-12-10 14:30:57.411275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.677 [2024-12-10 14:30:57.411301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.677 [2024-12-10 14:30:57.411887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.677 [2024-12-10 14:30:57.412116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.677 [2024-12-10 14:30:57.412126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.677 [2024-12-10 14:30:57.412133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.677 [2024-12-10 14:30:57.412153] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.937 [2024-12-10 14:30:57.423618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.937 [2024-12-10 14:30:57.424040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.937 [2024-12-10 14:30:57.424084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.937 [2024-12-10 14:30:57.424108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.937 [2024-12-10 14:30:57.424709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.937 [2024-12-10 14:30:57.425253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.937 [2024-12-10 14:30:57.425263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.937 [2024-12-10 14:30:57.425269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.937 [2024-12-10 14:30:57.425276] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.937 [2024-12-10 14:30:57.436400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.938 [2024-12-10 14:30:57.436821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.938 [2024-12-10 14:30:57.436865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.938 [2024-12-10 14:30:57.436889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.938 [2024-12-10 14:30:57.437363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.938 [2024-12-10 14:30:57.437535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.938 [2024-12-10 14:30:57.437545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.938 [2024-12-10 14:30:57.437553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.938 [2024-12-10 14:30:57.437559] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.938 [2024-12-10 14:30:57.449148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.938 [2024-12-10 14:30:57.449492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.938 [2024-12-10 14:30:57.449510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.938 [2024-12-10 14:30:57.449517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.938 [2024-12-10 14:30:57.449678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.938 [2024-12-10 14:30:57.449839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.938 [2024-12-10 14:30:57.449848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.938 [2024-12-10 14:30:57.449855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.938 [2024-12-10 14:30:57.449861] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.938 [2024-12-10 14:30:57.461988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.938 [2024-12-10 14:30:57.462404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.938 [2024-12-10 14:30:57.462421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.938 [2024-12-10 14:30:57.462428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.938 [2024-12-10 14:30:57.462589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.938 [2024-12-10 14:30:57.462751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.938 [2024-12-10 14:30:57.462761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.938 [2024-12-10 14:30:57.462767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.938 [2024-12-10 14:30:57.462773] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.938 [2024-12-10 14:30:57.474787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.938 [2024-12-10 14:30:57.475206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.938 [2024-12-10 14:30:57.475261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.938 [2024-12-10 14:30:57.475285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.938 [2024-12-10 14:30:57.475871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.938 [2024-12-10 14:30:57.476373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.938 [2024-12-10 14:30:57.476384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.938 [2024-12-10 14:30:57.476390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.938 [2024-12-10 14:30:57.476396] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.938 [2024-12-10 14:30:57.487534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.938 [2024-12-10 14:30:57.487955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.938 [2024-12-10 14:30:57.487974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.938 [2024-12-10 14:30:57.487981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.938 [2024-12-10 14:30:57.488142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.938 [2024-12-10 14:30:57.488327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.938 [2024-12-10 14:30:57.488337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.938 [2024-12-10 14:30:57.488344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.938 [2024-12-10 14:30:57.488351] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.938 [2024-12-10 14:30:57.500298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.938 [2024-12-10 14:30:57.500638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.938 [2024-12-10 14:30:57.500681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.938 [2024-12-10 14:30:57.500705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.938 [2024-12-10 14:30:57.501313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.938 [2024-12-10 14:30:57.501680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.938 [2024-12-10 14:30:57.501690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.938 [2024-12-10 14:30:57.501696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.938 [2024-12-10 14:30:57.501702] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.938 [2024-12-10 14:30:57.513134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.938 [2024-12-10 14:30:57.513544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.938 [2024-12-10 14:30:57.513583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.938 [2024-12-10 14:30:57.513608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.938 [2024-12-10 14:30:57.514192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.938 [2024-12-10 14:30:57.514411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.938 [2024-12-10 14:30:57.514421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.938 [2024-12-10 14:30:57.514428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.938 [2024-12-10 14:30:57.514434] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.938 [2024-12-10 14:30:57.525940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.938 [2024-12-10 14:30:57.526344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.938 [2024-12-10 14:30:57.526388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.938 [2024-12-10 14:30:57.526412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.938 [2024-12-10 14:30:57.526998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.938 [2024-12-10 14:30:57.527595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.938 [2024-12-10 14:30:57.527605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.938 [2024-12-10 14:30:57.527612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.938 [2024-12-10 14:30:57.527619] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1808766 Killed "${NVMF_APP[@]}" "$@" 00:28:56.938 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:56.938 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:56.938 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:56.938 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:56.938 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:56.938 [2024-12-10 14:30:57.538939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.938 [2024-12-10 14:30:57.539276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.938 [2024-12-10 14:30:57.539296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.938 [2024-12-10 14:30:57.539304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.938 [2024-12-10 14:30:57.539479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.938 [2024-12-10 14:30:57.539654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.938 [2024-12-10 14:30:57.539664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.938 [2024-12-10 14:30:57.539671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.938 [2024-12-10 14:30:57.539677] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.938 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1810156 00:28:56.938 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1810156 00:28:56.938 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:56.938 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1810156 ']' 00:28:56.938 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.938 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:56.938 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.939 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:56.939 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:56.939 [2024-12-10 14:30:57.551961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.939 [2024-12-10 14:30:57.552382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.939 [2024-12-10 14:30:57.552399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.939 [2024-12-10 14:30:57.552408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.939 [2024-12-10 14:30:57.552582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.939 [2024-12-10 14:30:57.552757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.939 [2024-12-10 14:30:57.552766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.939 [2024-12-10 14:30:57.552773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.939 [2024-12-10 14:30:57.552780] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
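nvmfappstart here launches a fresh nvmf_tgt (pid 1810156) inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that start-and-wait pattern, with the command line and paths taken from the log; the polling loop is an assumption, not a copy of common.sh:

# Sketch (paths from the log; the poll loop is assumed, not copied from common.sh):
sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1   # keep polling until the app listens on /var/tmp/spdk.sock
done
echo "nvmf_tgt up as pid $nvmfpid"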
00:28:56.939 [2024-12-10 14:30:57.565058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.939 [2024-12-10 14:30:57.565487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.939 [2024-12-10 14:30:57.565504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.939 [2024-12-10 14:30:57.565512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.939 [2024-12-10 14:30:57.565686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.939 [2024-12-10 14:30:57.565864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.939 [2024-12-10 14:30:57.565874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.939 [2024-12-10 14:30:57.565881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.939 [2024-12-10 14:30:57.565887] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.939 [2024-12-10 14:30:57.578177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.939 [2024-12-10 14:30:57.578606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.939 [2024-12-10 14:30:57.578624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.939 [2024-12-10 14:30:57.578633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.939 [2024-12-10 14:30:57.578808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.939 [2024-12-10 14:30:57.578985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.939 [2024-12-10 14:30:57.578995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.939 [2024-12-10 14:30:57.579002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.939 [2024-12-10 14:30:57.579009] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.939 [2024-12-10 14:30:57.586871] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:28:56.939 [2024-12-10 14:30:57.586912] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.939 [2024-12-10 14:30:57.591151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.939 [2024-12-10 14:30:57.591506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.939 [2024-12-10 14:30:57.591524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.939 [2024-12-10 14:30:57.591532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.939 [2024-12-10 14:30:57.591702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.939 [2024-12-10 14:30:57.591874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.939 [2024-12-10 14:30:57.591885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.939 [2024-12-10 14:30:57.591892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.939 [2024-12-10 14:30:57.591899] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.939 [2024-12-10 14:30:57.604206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.939 [2024-12-10 14:30:57.604638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.939 [2024-12-10 14:30:57.604655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.939 [2024-12-10 14:30:57.604663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.939 [2024-12-10 14:30:57.604841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.939 [2024-12-10 14:30:57.605015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.939 [2024-12-10 14:30:57.605024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.939 [2024-12-10 14:30:57.605031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.939 [2024-12-10 14:30:57.605038] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.939 [2024-12-10 14:30:57.617238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.939 [2024-12-10 14:30:57.617647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.939 [2024-12-10 14:30:57.617665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.939 [2024-12-10 14:30:57.617673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.939 [2024-12-10 14:30:57.617843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.939 [2024-12-10 14:30:57.618013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.939 [2024-12-10 14:30:57.618022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.939 [2024-12-10 14:30:57.618029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.939 [2024-12-10 14:30:57.618037] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.939 [2024-12-10 14:30:57.630305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.939 [2024-12-10 14:30:57.630735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.939 [2024-12-10 14:30:57.630753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.939 [2024-12-10 14:30:57.630761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.939 [2024-12-10 14:30:57.630937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.939 [2024-12-10 14:30:57.631110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.939 [2024-12-10 14:30:57.631121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.939 [2024-12-10 14:30:57.631130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.939 [2024-12-10 14:30:57.631137] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.939 [2024-12-10 14:30:57.643444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.939 [2024-12-10 14:30:57.643804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.939 [2024-12-10 14:30:57.643823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.939 [2024-12-10 14:30:57.643832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.939 [2024-12-10 14:30:57.644009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.939 [2024-12-10 14:30:57.644184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.939 [2024-12-10 14:30:57.644196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.939 [2024-12-10 14:30:57.644212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.939 [2024-12-10 14:30:57.644225] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.939 [2024-12-10 14:30:57.656512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.939 [2024-12-10 14:30:57.656849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.939 [2024-12-10 14:30:57.656867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.939 [2024-12-10 14:30:57.656875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.939 [2024-12-10 14:30:57.657050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.939 [2024-12-10 14:30:57.657231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.939 [2024-12-10 14:30:57.657242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.939 [2024-12-10 14:30:57.657249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.939 [2024-12-10 14:30:57.657255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.939 [2024-12-10 14:30:57.669539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.939 [2024-12-10 14:30:57.669825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.939 [2024-12-10 14:30:57.669842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:56.939 [2024-12-10 14:30:57.669851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:56.939 [2024-12-10 14:30:57.670025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:56.939 [2024-12-10 14:30:57.670062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:56.940 [2024-12-10 14:30:57.670200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.940 [2024-12-10 14:30:57.670210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.940 [2024-12-10 14:30:57.670224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.940 [2024-12-10 14:30:57.670231] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:57.200 [2024-12-10 14:30:57.682554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.200 [2024-12-10 14:30:57.683008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.200 [2024-12-10 14:30:57.683029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:57.200 [2024-12-10 14:30:57.683037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:57.200 [2024-12-10 14:30:57.683229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:57.200 [2024-12-10 14:30:57.683402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.200 [2024-12-10 14:30:57.683413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.200 [2024-12-10 14:30:57.683420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.200 [2024-12-10 14:30:57.683432] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.200 [2024-12-10 14:30:57.695519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.200 [2024-12-10 14:30:57.695911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.200 [2024-12-10 14:30:57.695929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:57.200 [2024-12-10 14:30:57.695937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:57.200 [2024-12-10 14:30:57.696107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:57.200 [2024-12-10 14:30:57.696300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.200 [2024-12-10 14:30:57.696310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.200 [2024-12-10 14:30:57.696317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.200 [2024-12-10 14:30:57.696324] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:57.200 [2024-12-10 14:30:57.708536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.200 [2024-12-10 14:30:57.708957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.200 [2024-12-10 14:30:57.708974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:57.200 [2024-12-10 14:30:57.708982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:57.200 [2024-12-10 14:30:57.709152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:57.200 [2024-12-10 14:30:57.709345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.200 [2024-12-10 14:30:57.709355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.200 [2024-12-10 14:30:57.709362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.200 [2024-12-10 14:30:57.709369] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:57.200 [2024-12-10 14:30:57.710931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:57.200 [2024-12-10 14:30:57.710957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:57.200 [2024-12-10 14:30:57.710964] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:57.200 [2024-12-10 14:30:57.710970] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:57.200 [2024-12-10 14:30:57.710975] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
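The app_setup_trace notices above spell out how to inspect the 0xFFFF tracepoint mask: attach to shm instance 0 while the app runs, or copy the shm file for offline analysis. A hedged sketch of both options (the -f flag for offline parsing is assumed from the spdk_trace tool, not from this log):

# Live snapshot from the running app (shm instance 0, as the NOTICE says):
spdk_trace -s nvmf -i 0
# Offline: copy the shm file first, then parse it later.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
spdk_trace -f /tmp/nvmf_trace.0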
00:28:57.200 [2024-12-10 14:30:57.712213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:57.200 [2024-12-10 14:30:57.712328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.200 [2024-12-10 14:30:57.712328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:57.200 [2024-12-10 14:30:57.721655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.200 [2024-12-10 14:30:57.722031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.200 [2024-12-10 14:30:57.722052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:57.200 [2024-12-10 14:30:57.722061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:57.200 [2024-12-10 14:30:57.722249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:57.200 [2024-12-10 14:30:57.722428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.200 [2024-12-10 14:30:57.722438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.200 [2024-12-10 14:30:57.722446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.200 [2024-12-10 14:30:57.722453] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:57.200 [2024-12-10 14:30:57.734750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.200 [2024-12-10 14:30:57.735145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.200 [2024-12-10 14:30:57.735165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:57.200 [2024-12-10 14:30:57.735173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:57.200 [2024-12-10 14:30:57.735354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:57.200 [2024-12-10 14:30:57.735532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.200 [2024-12-10 14:30:57.735541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.200 [2024-12-10 14:30:57.735549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.200 [2024-12-10 14:30:57.735557] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
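The three reactor_run notices line up with the -m 0xE core mask passed to nvmf_tgt: 0xE is binary 1110, so reactors land on cores 1, 2 and 3 while core 0 is left free, matching the earlier "Total cores available: 3" notice. An illustrative one-liner (not from the test suite) to expand such a mask:

# Illustrative: expand core mask 0xE into the core list the reactors use.
mask=0xE; for i in {0..7}; do (( (mask >> i) & 1 )) && printf '%d ' "$i"; done; echo
# -> 1 2 3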
00:28:57.200 [2024-12-10 14:30:57.747855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.200 [2024-12-10 14:30:57.748311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.200 [2024-12-10 14:30:57.748333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:57.200 [2024-12-10 14:30:57.748343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:57.200 [2024-12-10 14:30:57.748519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:57.200 [2024-12-10 14:30:57.748697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.200 [2024-12-10 14:30:57.748707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.200 [2024-12-10 14:30:57.748716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.201 [2024-12-10 14:30:57.748723] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:57.201 [2024-12-10 14:30:57.760861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.201 [2024-12-10 14:30:57.761315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.201 [2024-12-10 14:30:57.761338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:57.201 [2024-12-10 14:30:57.761348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:57.201 [2024-12-10 14:30:57.761525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:57.201 [2024-12-10 14:30:57.761702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.201 [2024-12-10 14:30:57.761718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.201 [2024-12-10 14:30:57.761726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.201 [2024-12-10 14:30:57.761733] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.201 [2024-12-10 14:30:57.773887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.201 [2024-12-10 14:30:57.774339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.201 [2024-12-10 14:30:57.774360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:57.201 [2024-12-10 14:30:57.774369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:57.201 [2024-12-10 14:30:57.774546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:57.201 [2024-12-10 14:30:57.774721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.201 [2024-12-10 14:30:57.774731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.201 [2024-12-10 14:30:57.774739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.201 [2024-12-10 14:30:57.774746] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:57.201 [2024-12-10 14:30:57.786889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.201 [2024-12-10 14:30:57.787265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.201 [2024-12-10 14:30:57.787283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:57.201 [2024-12-10 14:30:57.787291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:57.201 [2024-12-10 14:30:57.787466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:57.201 [2024-12-10 14:30:57.787642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.201 [2024-12-10 14:30:57.787652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.201 [2024-12-10 14:30:57.787659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.201 [2024-12-10 14:30:57.787666] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.201 [2024-12-10 14:30:57.799971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.201 [2024-12-10 14:30:57.800401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.201 [2024-12-10 14:30:57.800419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:57.201 [2024-12-10 14:30:57.800428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:57.201 [2024-12-10 14:30:57.800603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:57.201 [2024-12-10 14:30:57.800781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.201 [2024-12-10 14:30:57.800791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.201 [2024-12-10 14:30:57.800799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.201 [2024-12-10 14:30:57.800806] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:57.201 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:57.201 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:57.201 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:57.201 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:57.201 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.201 [2024-12-10 14:30:57.813117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.201 [2024-12-10 14:30:57.813485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.201 [2024-12-10 14:30:57.813504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:57.201 [2024-12-10 14:30:57.813515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:57.201 [2024-12-10 14:30:57.813694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:57.201 [2024-12-10 14:30:57.813870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.201 [2024-12-10 14:30:57.813880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.201 [2024-12-10 14:30:57.813887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.201 [2024-12-10 14:30:57.813895] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.201 [2024-12-10 14:30:57.826207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.201 [2024-12-10 14:30:57.826548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.201 [2024-12-10 14:30:57.826566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:57.201 [2024-12-10 14:30:57.826575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:57.201 [2024-12-10 14:30:57.826750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:57.201 [2024-12-10 14:30:57.826926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.201 [2024-12-10 14:30:57.826936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.201 [2024-12-10 14:30:57.826944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.201 [2024-12-10 14:30:57.826951] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:57.201 [2024-12-10 14:30:57.839254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.201 [2024-12-10 14:30:57.839642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.201 [2024-12-10 14:30:57.839660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:57.201 [2024-12-10 14:30:57.839668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:57.201 [2024-12-10 14:30:57.839844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:57.201 [2024-12-10 14:30:57.840021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.201 [2024-12-10 14:30:57.840031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.201 [2024-12-10 14:30:57.840043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.201 [2024-12-10 14:30:57.840050] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.201 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:57.201 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:57.201 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.201 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.201 [2024-12-10 14:30:57.848012] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.201 [2024-12-10 14:30:57.852354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.201 [2024-12-10 14:30:57.852691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.201 [2024-12-10 14:30:57.852708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:57.201 [2024-12-10 14:30:57.852716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:57.201 [2024-12-10 14:30:57.852891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:57.201 [2024-12-10 14:30:57.853065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.201 [2024-12-10 14:30:57.853075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.201 [2024-12-10 14:30:57.853082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.201 [2024-12-10 14:30:57.853089] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
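rpc_cmd is a thin wrapper around rpc.py, so the transport step above (answered by the "TCP Transport Init" notice) is equivalent to the direct call below; the flags are copied from the log and the socket path is assumed to be the default:

# Equivalent direct RPC for the logged nvmf_create_transport step
# (flags copied verbatim from the log; default RPC socket assumed).
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192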
00:28:57.201 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.201 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:57.201 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.201 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.201 [2024-12-10 14:30:57.865399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.201 [2024-12-10 14:30:57.865805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.201 [2024-12-10 14:30:57.865824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:57.201 [2024-12-10 14:30:57.865832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:57.201 [2024-12-10 14:30:57.866007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:57.201 [2024-12-10 14:30:57.866182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.201 [2024-12-10 14:30:57.866192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.201 [2024-12-10 14:30:57.866199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.202 [2024-12-10 14:30:57.866206] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:57.202 [2024-12-10 14:30:57.878541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.202 [2024-12-10 14:30:57.878934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.202 [2024-12-10 14:30:57.878952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:57.202 [2024-12-10 14:30:57.878966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:57.202 [2024-12-10 14:30:57.879142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:57.202 [2024-12-10 14:30:57.879324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.202 [2024-12-10 14:30:57.879334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.202 [2024-12-10 14:30:57.879341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.202 [2024-12-10 14:30:57.879348] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
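Between the interleaved reconnect errors, tgt_init keeps rebuilding the target: a malloc bdev here, then (a few lines below) the cnode1 subsystem, its namespace, and the TCP listener. Collected into one place, the logged rpc_cmd sequence amounts to the sketch below (default RPC socket assumed):

# The tgt_init RPC sequence, collected from this log block (default socket assumed).
rpc=./scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420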
00:28:57.202 Malloc0 00:28:57.202 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.202 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:57.202 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.202 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.202 [2024-12-10 14:30:57.891653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.202 [2024-12-10 14:30:57.892063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.202 [2024-12-10 14:30:57.892081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:57.202 [2024-12-10 14:30:57.892089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:57.202 [2024-12-10 14:30:57.892273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:57.202 [2024-12-10 14:30:57.892449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.202 [2024-12-10 14:30:57.892460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.202 [2024-12-10 14:30:57.892467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.202 [2024-12-10 14:30:57.892474] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:57.202 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.202 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:57.202 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.202 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.202 [2024-12-10 14:30:57.904769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.202 [2024-12-10 14:30:57.905127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.202 [2024-12-10 14:30:57.905145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129caa0 with addr=10.0.0.2, port=4420 00:28:57.202 [2024-12-10 14:30:57.905153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129caa0 is same with the state(6) to be set 00:28:57.202 [2024-12-10 14:30:57.905337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129caa0 (9): Bad file descriptor 00:28:57.202 [2024-12-10 14:30:57.905513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.202 [2024-12-10 14:30:57.905523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.202 [2024-12-10 14:30:57.905530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:28:57.202 [2024-12-10 14:30:57.905543] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:57.202 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.202 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:57.202 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.202 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.202 [2024-12-10 14:30:57.912119] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:57.202 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.202 14:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1809248 00:28:57.202 [2024-12-10 14:30:57.917829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.461 [2024-12-10 14:30:57.989047] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:28:58.396 4876.00 IOPS, 19.05 MiB/s [2024-12-10T13:31:00.512Z] 5815.57 IOPS, 22.72 MiB/s [2024-12-10T13:31:01.446Z] 6531.00 IOPS, 25.51 MiB/s [2024-12-10T13:31:02.382Z] 7074.11 IOPS, 27.63 MiB/s [2024-12-10T13:31:03.318Z] 7524.40 IOPS, 29.39 MiB/s [2024-12-10T13:31:04.253Z] 7885.73 IOPS, 30.80 MiB/s [2024-12-10T13:31:05.198Z] 8178.25 IOPS, 31.95 MiB/s [2024-12-10T13:31:06.137Z] 8448.23 IOPS, 33.00 MiB/s [2024-12-10T13:31:07.513Z] 8663.29 IOPS, 33.84 MiB/s [2024-12-10T13:31:07.513Z] 8855.60 IOPS, 34.59 MiB/s 00:29:06.773 Latency(us) 00:29:06.773 [2024-12-10T13:31:07.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:06.773 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:06.773 Verification LBA range: start 0x0 length 0x4000 00:29:06.773 Nvme1n1 : 15.01 8857.69 34.60 11176.55 0.00 6369.30 425.20 23842.62 00:29:06.773 [2024-12-10T13:31:07.513Z] =================================================================================================================== 00:29:06.773 [2024-12-10T13:31:07.513Z] Total : 8857.69 34.60 11176.55 0.00 6369.30 425.20 23842.62 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 
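Once bdev_nvme reconnects ("Resetting controller successful"), bdevperf completes its 15 s verify run (~8.8k IOPS average per the Latency table above) and nvmftestfini begins teardown. Reduced to its essentials, the cleanup the surrounding lines perform is sketched below; per the rmmod messages that follow, the first modprobe call also pulls out nvme_fabrics and nvme_keyring:

# Essentials of the nvmftestfini cleanup (module names from the rmmod lines below).
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sudo modprobe -v -r nvme-tcp        # log shows this unloading nvme_tcp, nvme_fabrics, nvme_keyring
sudo modprobe -v -r nvme-fabrics    # second pass, as in common.sh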
00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:06.773 rmmod nvme_tcp 00:29:06.773 rmmod nvme_fabrics 00:29:06.773 rmmod nvme_keyring 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1810156 ']' 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1810156 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1810156 ']' 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1810156 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1810156 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1810156' 00:29:06.773 killing process with pid 1810156 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1810156 00:29:06.773 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1810156 00:29:07.033 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:07.033 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:07.033 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:07.033 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:29:07.033 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:29:07.033 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:07.033 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:29:07.033 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:07.033 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:07.033 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.033 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.033 14:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.937 14:31:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:08.937 00:29:08.937 real 0m27.719s 00:29:08.937 user 1m3.761s 00:29:08.937 sys 0m7.437s 00:29:08.937 14:31:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:29:08.937 14:31:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.937 ************************************ 00:29:08.937 END TEST nvmf_bdevperf 00:29:08.937 ************************************ 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.197 ************************************ 00:29:09.197 START TEST nvmf_target_disconnect 00:29:09.197 ************************************ 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:09.197 * Looking for test storage... 00:29:09.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:09.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.197 --rc genhtml_branch_coverage=1 00:29:09.197 --rc genhtml_function_coverage=1 00:29:09.197 --rc genhtml_legend=1 00:29:09.197 --rc geninfo_all_blocks=1 00:29:09.197 --rc geninfo_unexecuted_blocks=1 00:29:09.197 00:29:09.197 ' 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:09.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.197 --rc genhtml_branch_coverage=1 00:29:09.197 --rc genhtml_function_coverage=1 00:29:09.197 --rc genhtml_legend=1 00:29:09.197 --rc geninfo_all_blocks=1 00:29:09.197 --rc geninfo_unexecuted_blocks=1 00:29:09.197 00:29:09.197 ' 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:09.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.197 --rc genhtml_branch_coverage=1 00:29:09.197 --rc genhtml_function_coverage=1 00:29:09.197 --rc genhtml_legend=1 00:29:09.197 --rc geninfo_all_blocks=1 00:29:09.197 --rc geninfo_unexecuted_blocks=1 00:29:09.197 00:29:09.197 ' 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:09.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.197 --rc genhtml_branch_coverage=1 00:29:09.197 --rc genhtml_function_coverage=1 00:29:09.197 --rc genhtml_legend=1 00:29:09.197 --rc geninfo_all_blocks=1 00:29:09.197 --rc geninfo_unexecuted_blocks=1 00:29:09.197 00:29:09.197 ' 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:09.197 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:09.198 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:09.198 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:09.198 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:09.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:09.198 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:09.198 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:09.198 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:09.457 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:09.457 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:09.457 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:09.457 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:09.457 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:09.457 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:09.457 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:09.457 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:09.457 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:09.457 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.457 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.457 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.457 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:09.457 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:09.457 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:09.457 14:31:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:16.035 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:16.036 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:16.036 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:16.036 Found net devices under 0000:af:00.0: cvl_0_0 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:16.036 Found net devices under 0000:af:00.1: cvl_0_1 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
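The nvmftestinit trace that follows builds a two-endpoint NVMe/TCP topology on a single host by moving one E810 port into a private network namespace, so that target (10.0.0.2) and initiator (10.0.0.1) traffic crosses the physical link. A minimal standalone sketch of the same setup, assuming the cvl_0_0/cvl_0_1 interface names discovered above:

  # sketch: recreate the target/initiator split that nvmftestinit performs below
  ip netns add cvl_0_0_ns_spdk                        # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # port 0 becomes the target NIC
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address on port 1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                  # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1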
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:16.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:16.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms
00:29:16.036 
00:29:16.036 --- 10.0.0.2 ping statistics ---
00:29:16.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:16.036 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:16.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:16.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms
00:29:16.036 
00:29:16.036 --- 10.0.0.1 ping statistics ---
00:29:16.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:16.036 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:29:16.036 ************************************
00:29:16.036 START TEST nvmf_target_disconnect_tc1
00:29:16.036 ************************************
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:16.036 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:29:16.037 14:31:16
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:16.037 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:29:16.037 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:16.037 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:29:16.037 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]]
00:29:16.037 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:16.296 [2024-12-10 14:31:16.849478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.296 [2024-12-10 14:31:16.849520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6410 with addr=10.0.0.2, port=4420
00:29:16.296 [2024-12-10 14:31:16.849545] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:29:16.296 [2024-12-10 14:31:16.849554] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:29:16.296 [2024-12-10 14:31:16.849560] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed
00:29:16.296 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:29:16.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:29:16.296 Initializing NVMe Controllers
00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1
00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:29:16.296 
00:29:16.296 real 0m0.125s
00:29:16.296 user 0m0.051s
00:29:16.296 sys 0m0.074s
00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:29:16.296 ************************************
00:29:16.296 END TEST nvmf_target_disconnect_tc1
00:29:16.296 ************************************
00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- #
xtrace_disable 00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:16.296 ************************************ 00:29:16.296 START TEST nvmf_target_disconnect_tc2 00:29:16.296 ************************************ 00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1815669 00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1815669 00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1815669 ']' 00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:16.296 14:31:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:16.296 [2024-12-10 14:31:16.987872] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:29:16.296 [2024-12-10 14:31:16.987911] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:16.554 [2024-12-10 14:31:17.071575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:16.554 [2024-12-10 14:31:17.112013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:16.554 [2024-12-10 14:31:17.112051] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:16.554 [2024-12-10 14:31:17.112058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:16.554 [2024-12-10 14:31:17.112064] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:16.554 [2024-12-10 14:31:17.112071] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:16.554 [2024-12-10 14:31:17.113710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:16.554 [2024-12-10 14:31:17.113817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:16.554 [2024-12-10 14:31:17.113925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:16.554 [2024-12-10 14:31:17.113925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:16.554 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:16.554 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:16.554 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:16.554 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:16.554 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:16.554 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:16.554 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:16.554 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.554 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:16.554 Malloc0 00:29:16.554 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.554 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:16.554 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.554 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:16.554 [2024-12-10 14:31:17.274552] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:16.554 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.554 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:16.554 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.554 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:16.554 14:31:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.555 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:16.555 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.555 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:16.811 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.811 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:16.811 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.811 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:16.811 [2024-12-10 14:31:17.303560] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:16.811 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.811 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:16.811 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.811 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:16.811 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.811 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1815804 00:29:16.811 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:16.811 14:31:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:18.717 14:31:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1815669 00:29:18.717 14:31:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error 
(sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Write completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Write completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Write completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Write completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Write completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Write completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Write completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Write completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Write completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Write completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 [2024-12-10 14:31:19.332474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Write completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Write 
completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Write completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Write completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Write completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Write completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Write completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Read completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Write completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Write completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.717 Write completed with error (sct=0, sc=8) 00:29:18.717 starting I/O failed 00:29:18.718 [2024-12-10 14:31:19.332679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 
00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 [2024-12-10 14:31:19.332875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 
starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Read completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 Write completed with error (sct=0, sc=8) 00:29:18.718 starting I/O failed 00:29:18.718 [2024-12-10 14:31:19.333074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:18.718 [2024-12-10 14:31:19.333277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.718 [2024-12-10 14:31:19.333300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.718 qpair failed and we were unable to recover it. 00:29:18.718 [2024-12-10 14:31:19.333649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.718 [2024-12-10 14:31:19.333704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.718 qpair failed and we were unable to recover it. 00:29:18.718 [2024-12-10 14:31:19.333918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.718 [2024-12-10 14:31:19.333954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.718 qpair failed and we were unable to recover it. 00:29:18.718 [2024-12-10 14:31:19.334165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.718 [2024-12-10 14:31:19.334198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.718 qpair failed and we were unable to recover it. 00:29:18.718 [2024-12-10 14:31:19.334427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.718 [2024-12-10 14:31:19.334462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.718 qpair failed and we were unable to recover it. 00:29:18.718 [2024-12-10 14:31:19.334583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.718 [2024-12-10 14:31:19.334616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.718 qpair failed and we were unable to recover it. 00:29:18.718 [2024-12-10 14:31:19.334734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.718 [2024-12-10 14:31:19.334768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.718 qpair failed and we were unable to recover it. 00:29:18.718 [2024-12-10 14:31:19.334929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.718 [2024-12-10 14:31:19.334964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.718 qpair failed and we were unable to recover it. 
00:29:18.718 [2024-12-10 14:31:19.335195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.718 [2024-12-10 14:31:19.335236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.718 qpair failed and we were unable to recover it. 00:29:18.718 [2024-12-10 14:31:19.335398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.718 [2024-12-10 14:31:19.335410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.718 qpair failed and we were unable to recover it. 00:29:18.718 [2024-12-10 14:31:19.335502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.718 [2024-12-10 14:31:19.335514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.718 qpair failed and we were unable to recover it. 00:29:18.718 [2024-12-10 14:31:19.335715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.718 [2024-12-10 14:31:19.335748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.718 qpair failed and we were unable to recover it. 00:29:18.718 [2024-12-10 14:31:19.335887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.718 [2024-12-10 14:31:19.335919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.718 qpair failed and we were unable to recover it. 00:29:18.718 [2024-12-10 14:31:19.336068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.718 [2024-12-10 14:31:19.336101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.718 qpair failed and we were unable to recover it. 00:29:18.718 [2024-12-10 14:31:19.336213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.718 [2024-12-10 14:31:19.336259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.719 qpair failed and we were unable to recover it. 00:29:18.719 [2024-12-10 14:31:19.336451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.719 [2024-12-10 14:31:19.336485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.719 qpair failed and we were unable to recover it. 00:29:18.719 [2024-12-10 14:31:19.336615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.719 [2024-12-10 14:31:19.336647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.719 qpair failed and we were unable to recover it. 00:29:18.719 [2024-12-10 14:31:19.336777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.719 [2024-12-10 14:31:19.336810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.719 qpair failed and we were unable to recover it. 
[... the same three-line error (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats verbatim with successive timestamps between the entries shown above and below ...]
00:29:18.724 [2024-12-10 14:31:19.374455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.374487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.374663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.374696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.374877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.374920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.375114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.375147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.375340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.375373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.375570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.375604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.375843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.375875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.376065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.376098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.376236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.376270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.376381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.376413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 
00:29:18.724 [2024-12-10 14:31:19.376553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.376585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.376689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.376722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.376946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.376979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.377087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.377119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.377396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.377430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.377692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.377725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.377920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.377953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.378073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.378106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.378295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.378330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.378437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.378469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 
00:29:18.724 [2024-12-10 14:31:19.378681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.378714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.378821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.378853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.379032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.379066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.379244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.379278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.379397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.379429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.379610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.379643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.379814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.379848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.380110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.380143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.380262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.380296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.380422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.380454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 
00:29:18.724 [2024-12-10 14:31:19.380723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.380756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.380929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.380961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.381106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.381139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.724 [2024-12-10 14:31:19.381272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.724 [2024-12-10 14:31:19.381306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.724 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.381419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.381452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.381629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.381663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.381902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.381934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.382119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.382152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.382348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.382382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.382581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.382615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 
00:29:18.725 [2024-12-10 14:31:19.382813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.382846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.383087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.383119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.383231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.383264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.383444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.383477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.383648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.383680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.383825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.383859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.383974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.384006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.384123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.384156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.384370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.384404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.384599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.384632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 
00:29:18.725 [2024-12-10 14:31:19.384872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.384904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.385010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.385043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.385234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.385268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.385510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.385542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.385650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.385682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.385894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.385927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.386035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.386065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.386248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.386282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.386469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.386502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.386759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.386792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 
00:29:18.725 [2024-12-10 14:31:19.386909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.386941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.387183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.387225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.387488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.387521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.387651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.387683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.387793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.387826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.387997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.388029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.388142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.388173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.388367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.388400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.388574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.388606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.388859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.388892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 
00:29:18.725 [2024-12-10 14:31:19.389000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.389039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.389266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.389301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.389560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.389593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.725 qpair failed and we were unable to recover it. 00:29:18.725 [2024-12-10 14:31:19.389802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.725 [2024-12-10 14:31:19.389833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.389935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.389966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.390087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.390119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.390307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.390341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.390534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.390566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.390736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.390769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.390954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.390985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 
00:29:18.726 [2024-12-10 14:31:19.391230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.391264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.391436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.391467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.391656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.391689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.391820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.391852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.392034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.392068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.392255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.392289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.392405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.392437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.392553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.392587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.392770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.392802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.392915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.392947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 
00:29:18.726 [2024-12-10 14:31:19.393122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.393155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.393272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.393306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.393474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.393507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.393641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.393674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.393855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.393887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.393995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.394027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.394144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.394177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.394298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.394338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.394607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.394640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.394751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.394783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 
00:29:18.726 [2024-12-10 14:31:19.394890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.394923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.395027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.395059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.395240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.395274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.395449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.395482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.395745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.395777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.395987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.396019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.396294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.396328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.726 qpair failed and we were unable to recover it. 00:29:18.726 [2024-12-10 14:31:19.396573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.726 [2024-12-10 14:31:19.396605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.396730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.396762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.396888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.396923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 
00:29:18.727 [2024-12-10 14:31:19.397172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.397205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.397325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.397358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.397471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.397505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.397625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.397657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.397849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.397881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.398074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.398106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.398275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.398310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.398431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.398463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.398581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.398614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.398798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.398838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 
00:29:18.727 [2024-12-10 14:31:19.399052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.399085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.399334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.399369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.399502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.399534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.399721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.399753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.399935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.399974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.400227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.400260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.400501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.400535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.400658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.400690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.400930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.400962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.401160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.401193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 
00:29:18.727 [2024-12-10 14:31:19.401346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.401378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.401548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.401582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.401794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.401825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.402069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.402102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.402226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.402260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.402449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.402481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.402689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.402722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.402838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.402870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.402995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.403028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.403199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.403242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 
00:29:18.727 [2024-12-10 14:31:19.403377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.403410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.403541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.403574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.403677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.403708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.403820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.403853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.404034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.404066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.404255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.404290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.404478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.727 [2024-12-10 14:31:19.404510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.727 qpair failed and we were unable to recover it. 00:29:18.727 [2024-12-10 14:31:19.404644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.404677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.404872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.404904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.405022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.405055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 
00:29:18.728 [2024-12-10 14:31:19.405322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.405357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.405536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.405568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.405757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.405791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.406036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.406067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.406185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.406227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.406410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.406442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.406614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.406647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.406892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.406924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.407026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.407058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.407236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.407271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 
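For context: errno = 111 is ECONNREFUSED on Linux, meaning each TCP SYN sent to 10.0.0.2:4420 (the IANA-assigned NVMe/TCP port) was answered with a reset because nothing was listening on that port yet. The minimal plain-POSIX sketch below (not SPDK code; only the address and port are taken from the records above) reproduces the exact errno these posix_sock_create lines report:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                       /* IANA-assigned NVMe/TCP port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);    /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on 10.0.0.2:4420 this prints:
             *   connect() failed, errno = 111 (Connection refused)   */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

Run against a host with no listener on port 4420, it prints "connect() failed, errno = 111 (Connection refused)", matching the records above.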
00:29:18.728 [2024-12-10 14:31:19.407448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.407480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.407680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.407713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.407920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.407953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.408139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.408172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.408428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.408462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.408687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.408761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.408960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.408996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.409194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.409247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.409371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.409404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.409670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.409702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 
00:29:18.728 [2024-12-10 14:31:19.409832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.409864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.410041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.410074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.410253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.410287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.410393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.410424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.410537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.410570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.410691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.410723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.410907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.410939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.411130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.411164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.411356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.411399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.411541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.411574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 
00:29:18.728 [2024-12-10 14:31:19.411769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.411801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.411997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.412030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.412206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.412249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.412371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.412403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.412539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.412572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.412853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.412885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.728 qpair failed and we were unable to recover it. 00:29:18.728 [2024-12-10 14:31:19.413081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.728 [2024-12-10 14:31:19.413113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.413241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.413275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.413454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.413487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.413666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.413699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 
00:29:18.729 [2024-12-10 14:31:19.413825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.413858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.414028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.414061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.414329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.414363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.414503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.414535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.414655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.414688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.414866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.414898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.415090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.415123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.415302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.415336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.415602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.415635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.415808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.415841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 
00:29:18.729 [2024-12-10 14:31:19.415969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.416002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.416191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.416232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.416429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.416462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.416638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.416671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.416801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.416834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.416951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.416985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.417160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.417193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.417326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.417360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.417483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.417516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.417754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.417787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 
00:29:18.729 [2024-12-10 14:31:19.417972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.418005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.418181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.418214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.418340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.418374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.418483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.418515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.418712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.418745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.418928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.418961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.419238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.419273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.419461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.419493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.419682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.419715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.419908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.419941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 
00:29:18.729 [2024-12-10 14:31:19.420204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.420249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.420462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.420494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.420687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.420720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.420836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.420869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.421067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.421100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.729 [2024-12-10 14:31:19.421262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.729 [2024-12-10 14:31:19.421297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.729 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.421437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.421471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.421655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.421688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.421866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.421899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.422018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.422051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 
00:29:18.730 [2024-12-10 14:31:19.422269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.422304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.422485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.422517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.422793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.422827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.423025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.423058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.423305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.423338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.423594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.423628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.423870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.423903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.424102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.424135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.424319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.424373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.424496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.424529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 
00:29:18.730 [2024-12-10 14:31:19.424659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.424692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.424820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.424852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.425065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.425098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.425235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.425270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.425378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.425411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.425655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.425693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.425895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.425928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.426113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.426145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.426336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.426370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.426493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.426525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 
00:29:18.730 [2024-12-10 14:31:19.426712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.426746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.426924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.426964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.427151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.427185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.427350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.427385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.427593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.427626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.427805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.427838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.428027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.428060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.428257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.428290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.428464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.428497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.428750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.428783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 
00:29:18.730 [2024-12-10 14:31:19.428957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.428990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.429239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.429273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.730 qpair failed and we were unable to recover it. 00:29:18.730 [2024-12-10 14:31:19.429445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.730 [2024-12-10 14:31:19.429478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.429686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.429719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.429854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.429887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.430004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.430037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.430311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.430346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.430535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.430567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.430853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.430885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.430999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.431032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 
00:29:18.731 [2024-12-10 14:31:19.431205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.431256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.431439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.431471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.431598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.431632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.431808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.431841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.432033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.432066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.432314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.432349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.432463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.432496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.432600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.432633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.432832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.432866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.433104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.433137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 
00:29:18.731 [2024-12-10 14:31:19.433320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.433356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.433560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.433593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.433769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.433803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.434020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.434054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.434299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.434334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.434538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.434579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.434691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.434724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.434917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.434951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.435149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.435182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.435379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.435414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 
00:29:18.731 [2024-12-10 14:31:19.435590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.435623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.435797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.435829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.436039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.436071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.436253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.436288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.436479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.436512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.436630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.436662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.436835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.436868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.437072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.437105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.437298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.437332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.731 qpair failed and we were unable to recover it. 00:29:18.731 [2024-12-10 14:31:19.437523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.731 [2024-12-10 14:31:19.437556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.732 qpair failed and we were unable to recover it. 
00:29:18.732 [2024-12-10 14:31:19.437732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.732 [2024-12-10 14:31:19.437766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.732 qpair failed and we were unable to recover it. 00:29:18.732 [2024-12-10 14:31:19.438007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.732 [2024-12-10 14:31:19.438039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.732 qpair failed and we were unable to recover it. 00:29:18.732 [2024-12-10 14:31:19.438245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.732 [2024-12-10 14:31:19.438281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.732 qpair failed and we were unable to recover it. 00:29:18.732 [2024-12-10 14:31:19.438522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.732 [2024-12-10 14:31:19.438555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.732 qpair failed and we were unable to recover it. 00:29:18.732 [2024-12-10 14:31:19.438741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.732 [2024-12-10 14:31:19.438774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.732 qpair failed and we were unable to recover it. 00:29:18.732 [2024-12-10 14:31:19.438975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.732 [2024-12-10 14:31:19.439007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.732 qpair failed and we were unable to recover it. 00:29:18.732 [2024-12-10 14:31:19.439261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.732 [2024-12-10 14:31:19.439296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.732 qpair failed and we were unable to recover it. 00:29:18.732 [2024-12-10 14:31:19.439488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.732 [2024-12-10 14:31:19.439521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.732 qpair failed and we were unable to recover it. 00:29:18.732 [2024-12-10 14:31:19.439700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.732 [2024-12-10 14:31:19.439733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.732 qpair failed and we were unable to recover it. 00:29:18.732 [2024-12-10 14:31:19.439909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.732 [2024-12-10 14:31:19.439942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:18.732 qpair failed and we were unable to recover it. 
00:29:18.732 [2024-12-10 14:31:19.440067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.732 [2024-12-10 14:31:19.440100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:18.732 qpair failed and we were unable to recover it.
[... the same three-line record (posix.c:1054:posix_sock_create connect() failed, errno = 111, i.e. ECONNREFUSED; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously through [2024-12-10 14:31:19.484700], console time 00:29:18.732 to 00:29:19.019 ...]
00:29:19.019 [2024-12-10 14:31:19.484872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.019 [2024-12-10 14:31:19.484906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.019 qpair failed and we were unable to recover it. 00:29:19.019 [2024-12-10 14:31:19.485032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.019 [2024-12-10 14:31:19.485064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.019 qpair failed and we were unable to recover it. 00:29:19.019 [2024-12-10 14:31:19.485269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.019 [2024-12-10 14:31:19.485303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.019 qpair failed and we were unable to recover it. 00:29:19.019 [2024-12-10 14:31:19.485419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.019 [2024-12-10 14:31:19.485453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.019 qpair failed and we were unable to recover it. 00:29:19.019 [2024-12-10 14:31:19.485639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.019 [2024-12-10 14:31:19.485671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.019 qpair failed and we were unable to recover it. 00:29:19.019 [2024-12-10 14:31:19.485860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.019 [2024-12-10 14:31:19.485898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.019 qpair failed and we were unable to recover it. 00:29:19.019 [2024-12-10 14:31:19.486163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.019 [2024-12-10 14:31:19.486197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.019 qpair failed and we were unable to recover it. 00:29:19.019 [2024-12-10 14:31:19.486332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.019 [2024-12-10 14:31:19.486365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.019 qpair failed and we were unable to recover it. 00:29:19.019 [2024-12-10 14:31:19.486480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.019 [2024-12-10 14:31:19.486512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.019 qpair failed and we were unable to recover it. 00:29:19.019 [2024-12-10 14:31:19.486632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.019 [2024-12-10 14:31:19.486665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.019 qpair failed and we were unable to recover it. 
00:29:19.019 [2024-12-10 14:31:19.486790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.019 [2024-12-10 14:31:19.486823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.019 qpair failed and we were unable to recover it. 00:29:19.019 [2024-12-10 14:31:19.486939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.019 [2024-12-10 14:31:19.486973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.019 qpair failed and we were unable to recover it. 00:29:19.019 [2024-12-10 14:31:19.487163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.019 [2024-12-10 14:31:19.487196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.019 qpair failed and we were unable to recover it. 00:29:19.019 [2024-12-10 14:31:19.487468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.019 [2024-12-10 14:31:19.487503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.019 qpair failed and we were unable to recover it. 00:29:19.019 [2024-12-10 14:31:19.487691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.019 [2024-12-10 14:31:19.487724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.019 qpair failed and we were unable to recover it. 00:29:19.019 [2024-12-10 14:31:19.487852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.019 [2024-12-10 14:31:19.487885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.019 qpair failed and we were unable to recover it. 00:29:19.019 [2024-12-10 14:31:19.488009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.019 [2024-12-10 14:31:19.488041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.019 qpair failed and we were unable to recover it. 00:29:19.019 [2024-12-10 14:31:19.488295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.019 [2024-12-10 14:31:19.488330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.019 qpair failed and we were unable to recover it. 00:29:19.019 [2024-12-10 14:31:19.488441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.019 [2024-12-10 14:31:19.488471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.019 qpair failed and we were unable to recover it. 00:29:19.019 [2024-12-10 14:31:19.488741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.019 [2024-12-10 14:31:19.488774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.019 qpair failed and we were unable to recover it. 
00:29:19.019 [2024-12-10 14:31:19.488964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.488997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.489103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.489136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.489288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.489323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.489489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.489523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.489740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.489772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.489951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.489983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.490122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.490156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.490290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.490323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.490431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.490463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.490650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.490683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 
00:29:19.020 [2024-12-10 14:31:19.490923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.490956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.491080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.491113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.491379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.491414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.491621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.491654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.491838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.491872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.492063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.492096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.492231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.492266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.492445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.492478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.492652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.492685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.492965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.492998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 
00:29:19.020 [2024-12-10 14:31:19.493199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.493241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.493372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.493405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.493517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.493548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.493738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.493771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.493957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.493989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.494162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.494201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.494388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.494422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.494610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.020 [2024-12-10 14:31:19.494643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.020 qpair failed and we were unable to recover it. 00:29:19.020 [2024-12-10 14:31:19.494764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.494797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.494976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.495009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 
00:29:19.021 [2024-12-10 14:31:19.495252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.495288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.495410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.495444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.495551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.495583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.495847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.495881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.496140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.496172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.496442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.496476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.496615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.496648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.496831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.496864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.497127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.497161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.497388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.497422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 
00:29:19.021 [2024-12-10 14:31:19.497634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.497667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.497821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.497854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.498092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.498123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.498323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.498358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.498464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.498494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.498668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.498700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.498980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.499012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.499183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.499233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.499445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.499478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.499614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.499646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 
00:29:19.021 [2024-12-10 14:31:19.499781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.499814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.499997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.500030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.500140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.500171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.500299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.500334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.500527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.500560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.500818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.021 [2024-12-10 14:31:19.500851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.021 qpair failed and we were unable to recover it. 00:29:19.021 [2024-12-10 14:31:19.500975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.501007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.501188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.501229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.501406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.501439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.501635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.501667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 
00:29:19.022 [2024-12-10 14:31:19.501850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.501882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.502090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.502123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.502256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.502290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.502490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.502524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.502789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.502822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.502941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.502979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.503243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.503277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.503525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.503558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.503747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.503780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.503971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.504004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 
00:29:19.022 [2024-12-10 14:31:19.504205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.504248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.504355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.504385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.504577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.504610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.504871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.504904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.505046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.505078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.505277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.505310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.505443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.505477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.505646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.505678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.505862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.505895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.506141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.506175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 
00:29:19.022 [2024-12-10 14:31:19.506324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.506357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.506481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.506515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.506705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.506739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.022 [2024-12-10 14:31:19.506870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.022 [2024-12-10 14:31:19.506902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.022 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.507070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.507103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.507298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.507332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.507507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.507539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.507722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.507754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.507949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.507982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.508094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.508126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 
00:29:19.023 [2024-12-10 14:31:19.508326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.508361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.508474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.508508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.508844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.508928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.509152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.509188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.509416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.509450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.509709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.509742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.509915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.509948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.510164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.510197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.510407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.510440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.510636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.510669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 
00:29:19.023 [2024-12-10 14:31:19.510947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.510979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.511234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.511268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.511380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.511411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.511610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.511643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.511833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.511865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.512058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.512099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.512231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.512266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.512484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.512516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.512698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.512731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 00:29:19.023 [2024-12-10 14:31:19.512911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.023 [2024-12-10 14:31:19.512944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.023 qpair failed and we were unable to recover it. 
00:29:19.023 [2024-12-10 14:31:19.513133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.023 [2024-12-10 14:31:19.513166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:19.023 qpair failed and we were unable to recover it.
[... the same connect() failure and unrecoverable-qpair message for tqpair=0x7f6370000b90 repeats continuously from 14:31:19.513 through 14:31:19.529 ...]
00:29:19.026 [2024-12-10 14:31:19.529476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.026 [2024-12-10 14:31:19.529560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:19.026 qpair failed and we were unable to recover it.
[... the same failure sequence for tqpair=0x7f6364000b90 repeats continuously through 14:31:19.558 ...]
00:29:19.031 [2024-12-10 14:31:19.559116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.559148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.559352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.559386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.559650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.559683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.559859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.559891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.560028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.560060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.560243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.560276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.560456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.560488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.560731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.560763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.560949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.560983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.561163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.561196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 
00:29:19.031 [2024-12-10 14:31:19.561385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.561417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.561593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.561626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.561767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.561805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.561997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.562030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.562297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.562331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.562571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.562603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.562781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.562815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.562999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.563031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.563215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.563256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.563376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.563409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 
00:29:19.031 [2024-12-10 14:31:19.563586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.563618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.563869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.563901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.564030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.564062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.564274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.564310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.564433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.564466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.564578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.564610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.564728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.564762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.564937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.564969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.565143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.565175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.565368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.565403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 
00:29:19.031 [2024-12-10 14:31:19.565644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.565676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.565786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.565818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.565990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.566021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.566286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.566320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.566459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.566491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.566669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.566702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.566939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.566972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.567173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.567206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.031 qpair failed and we were unable to recover it. 00:29:19.031 [2024-12-10 14:31:19.567456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.031 [2024-12-10 14:31:19.567489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.567745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.567777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 
00:29:19.032 [2024-12-10 14:31:19.567975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.568008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.568264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.568300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.568417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.568449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.568714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.568748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.568954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.568986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.569176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.569208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.569360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.569393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.569662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.569695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.569804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.569836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.570015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.570047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 
00:29:19.032 [2024-12-10 14:31:19.570229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.570264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.570396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.570429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.570638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.570676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.570869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.570903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.571089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.571121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.571238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.571270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.571392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.571424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.571559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.571590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.571730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.571762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.571886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.571919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 
00:29:19.032 [2024-12-10 14:31:19.572117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.572149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.572439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.572472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.572595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.572627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.572735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.572766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.572875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.572908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.573030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.573062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.573253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.573287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.573528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.573560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.573755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.573788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.573921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.573953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 
00:29:19.032 [2024-12-10 14:31:19.574146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.574178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.574385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.574420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.574564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.574597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.574744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.574777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.575022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.575054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.575262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.575297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.575423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.032 [2024-12-10 14:31:19.575455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.032 qpair failed and we were unable to recover it. 00:29:19.032 [2024-12-10 14:31:19.575586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.575619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.575812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.575846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.576038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.576071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 
00:29:19.033 [2024-12-10 14:31:19.576197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.576240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.576416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.576449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.576665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.576697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.576880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.576913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.577093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.577126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.577322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.577357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.577481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.577514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.577638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.577670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.577851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.577883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.578090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.578122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 
00:29:19.033 [2024-12-10 14:31:19.578337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.578371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.578633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.578665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.578797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.578838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.578950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.578982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.579094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.579127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.579299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.579332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.579510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.579542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.579719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.579752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.579871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.579905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.580018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.580051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 
00:29:19.033 [2024-12-10 14:31:19.580163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.580196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.580401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.580434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.580617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.580650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.582537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.582595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.582932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.582967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.583187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.583256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.583469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.583500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.583628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.583660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.583787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.583817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.584063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.584093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 
00:29:19.033 [2024-12-10 14:31:19.584205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.584249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.584439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.584469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.584600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.584630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.584866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.584896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.585012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.585042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.585288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.033 [2024-12-10 14:31:19.585320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.033 qpair failed and we were unable to recover it. 00:29:19.033 [2024-12-10 14:31:19.585423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.585454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 00:29:19.034 [2024-12-10 14:31:19.585712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.585742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 00:29:19.034 [2024-12-10 14:31:19.585855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.585885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 00:29:19.034 [2024-12-10 14:31:19.586124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.586155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 
00:29:19.034 [2024-12-10 14:31:19.586290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.586321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 00:29:19.034 [2024-12-10 14:31:19.586446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.586477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 00:29:19.034 [2024-12-10 14:31:19.586590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.586619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 00:29:19.034 [2024-12-10 14:31:19.586791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.586822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 00:29:19.034 [2024-12-10 14:31:19.587061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.587091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 00:29:19.034 [2024-12-10 14:31:19.587253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.587284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 00:29:19.034 [2024-12-10 14:31:19.587390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.587420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 00:29:19.034 [2024-12-10 14:31:19.587601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.587631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 00:29:19.034 [2024-12-10 14:31:19.587736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.587763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 00:29:19.034 [2024-12-10 14:31:19.587860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.587890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 
00:29:19.034 [2024-12-10 14:31:19.588009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.588039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 00:29:19.034 [2024-12-10 14:31:19.588171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.588201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 00:29:19.034 [2024-12-10 14:31:19.588324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.588359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 00:29:19.034 [2024-12-10 14:31:19.588619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.588649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 00:29:19.034 [2024-12-10 14:31:19.588749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.588780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 00:29:19.034 [2024-12-10 14:31:19.588897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.588927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 00:29:19.034 [2024-12-10 14:31:19.589063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.589095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 00:29:19.034 [2024-12-10 14:31:19.589213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.589253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 00:29:19.034 [2024-12-10 14:31:19.589434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.589464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 00:29:19.034 [2024-12-10 14:31:19.589702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.034 [2024-12-10 14:31:19.589733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.034 qpair failed and we were unable to recover it. 
00:29:19.038 [2024-12-10 14:31:19.616831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.038 [2024-12-10 14:31:19.616905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:19.038 qpair failed and we were unable to recover it.
00:29:19.038 [2024-12-10 14:31:19.617169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.038 [2024-12-10 14:31:19.617253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:19.038 qpair failed and we were unable to recover it.
[... identical connect() failure (errno = 111) and qpair recovery messages for tqpair=0x240d500 repeated with successive timestamps through 2024-12-10 14:31:19.628486 ...]
00:29:19.039 [2024-12-10 14:31:19.628671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.039 [2024-12-10 14:31:19.628704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.039 qpair failed and we were unable to recover it. 00:29:19.039 [2024-12-10 14:31:19.628816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.628851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.628952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.628985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.629159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.629192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.629405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.629438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.629564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.629597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.629720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.629753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.629966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.629999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.630130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.630167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.630350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.630384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 
00:29:19.040 [2024-12-10 14:31:19.630623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.630656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.630778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.630811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.630996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.631029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.631136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.631168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.631435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.631469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.631642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.631675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.631846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.631878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.631989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.632035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.632282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.632316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.632498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.632531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 
00:29:19.040 [2024-12-10 14:31:19.632704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.632747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.632870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.632900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.633016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.633047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.633308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.633343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.633517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.633549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.633792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.633825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.634088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.634121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.634297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.634330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.634460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.634492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.634599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.634632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 
00:29:19.040 [2024-12-10 14:31:19.634878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.634909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.635029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.635061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.635259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.635294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.635425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.635458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.635685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.635755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.635889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.635925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.636168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.636205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.636339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.636374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.636551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.636584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 00:29:19.040 [2024-12-10 14:31:19.636715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.040 [2024-12-10 14:31:19.636748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.040 qpair failed and we were unable to recover it. 
00:29:19.040 [2024-12-10 14:31:19.637008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.637048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.637242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.637291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.637485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.637523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.637649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.637681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.637868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.637902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.638147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.638182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.638318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.638353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.638544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.638587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.638774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.638807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.639051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.639086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 
00:29:19.041 [2024-12-10 14:31:19.639203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.639245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.639427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.639460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.639580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.639618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.639810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.639846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.639971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.640013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.640139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.640170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.640332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.640369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.640565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.640609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.640740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.640773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.640996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.641029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 
00:29:19.041 [2024-12-10 14:31:19.641156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.641189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.641329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.641363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.641477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.641510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.641683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.641717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.641894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.641927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.642109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.642141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.642268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.642304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.642495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.642528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.642636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.642669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.642863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.642897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 
00:29:19.041 [2024-12-10 14:31:19.643019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.643052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.643172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.643205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.643325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.643359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.643477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.643510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.643658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.643696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.643822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.643854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.643991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.644024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.644199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.644247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.644448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.041 [2024-12-10 14:31:19.644481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.041 qpair failed and we were unable to recover it. 00:29:19.041 [2024-12-10 14:31:19.644617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.644649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 
00:29:19.042 [2024-12-10 14:31:19.644782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.644815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.644925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.644958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.645080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.645112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.645284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.645318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.645574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.645607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.645738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.645771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.645885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.645917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.646057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.646090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.646296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.646329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.646462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.646494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 
00:29:19.042 [2024-12-10 14:31:19.646665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.646698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.646817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.646850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.647035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.647067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.647183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.647216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.647360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.647392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.647576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.647608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.647750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.647784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.647979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.648011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.648190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.648231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.648351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.648383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 
00:29:19.042 [2024-12-10 14:31:19.648559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.648593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.648793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.648832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.648956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.648988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.649167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.649199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.649346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.649378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.649509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.649542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.649672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.649706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.649824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.649857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.650091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.650124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.650306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.650341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 
00:29:19.042 [2024-12-10 14:31:19.650459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.650492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.650662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.650694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.650865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.650897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.651001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.651034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.651213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.651254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.651469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.651501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.651626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.651659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.651797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.651829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.042 qpair failed and we were unable to recover it. 00:29:19.042 [2024-12-10 14:31:19.651952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.042 [2024-12-10 14:31:19.651984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.043 qpair failed and we were unable to recover it. 00:29:19.043 [2024-12-10 14:31:19.652104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.043 [2024-12-10 14:31:19.652137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.043 qpair failed and we were unable to recover it. 
00:29:19.043 [2024-12-10 14:31:19.652261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.043 [2024-12-10 14:31:19.652294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.043 qpair failed and we were unable to recover it. 00:29:19.043 [2024-12-10 14:31:19.652433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.043 [2024-12-10 14:31:19.652467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.043 qpair failed and we were unable to recover it. 00:29:19.043 [2024-12-10 14:31:19.652590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.043 [2024-12-10 14:31:19.652623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.043 qpair failed and we were unable to recover it. 00:29:19.043 [2024-12-10 14:31:19.652801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.043 [2024-12-10 14:31:19.652833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.043 qpair failed and we were unable to recover it. 00:29:19.043 [2024-12-10 14:31:19.653016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.043 [2024-12-10 14:31:19.653048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.043 qpair failed and we were unable to recover it. 00:29:19.043 [2024-12-10 14:31:19.653165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.043 [2024-12-10 14:31:19.653197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.043 qpair failed and we were unable to recover it. 00:29:19.043 [2024-12-10 14:31:19.653338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.043 [2024-12-10 14:31:19.653371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.043 qpair failed and we were unable to recover it. 00:29:19.043 [2024-12-10 14:31:19.653486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.043 [2024-12-10 14:31:19.653518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.043 qpair failed and we were unable to recover it. 00:29:19.043 [2024-12-10 14:31:19.653643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.043 [2024-12-10 14:31:19.653682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.043 qpair failed and we were unable to recover it. 00:29:19.043 [2024-12-10 14:31:19.653864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.043 [2024-12-10 14:31:19.653897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.043 qpair failed and we were unable to recover it. 
00:29:19.043 [2024-12-10 14:31:19.654007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.043 [2024-12-10 14:31:19.654040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:19.043 qpair failed and we were unable to recover it.
[... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet above repeats on every attempt for tqpair=0x240d500, addr=10.0.0.2, port=4420 through 2024-12-10 14:31:19.665852 ...]
00:29:19.045 [2024-12-10 14:31:19.665989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.045 [2024-12-10 14:31:19.666039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:19.045 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x7f6368000b90, addr=10.0.0.2, port=4420 through 2024-12-10 14:31:19.673529 ...]
00:29:19.046 [2024-12-10 14:31:19.673661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.046 [2024-12-10 14:31:19.673699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:19.046 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x240d500, addr=10.0.0.2, port=4420 through 2024-12-10 14:31:19.690560 ...]
00:29:19.048 [2024-12-10 14:31:19.690527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.048 [2024-12-10 14:31:19.690560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.048 qpair failed and we were unable to recover it. 00:29:19.048 [2024-12-10 14:31:19.690702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.048 [2024-12-10 14:31:19.690745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.048 qpair failed and we were unable to recover it. 00:29:19.048 [2024-12-10 14:31:19.690875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.048 [2024-12-10 14:31:19.690909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.048 qpair failed and we were unable to recover it. 00:29:19.048 [2024-12-10 14:31:19.691089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.048 [2024-12-10 14:31:19.691122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.048 qpair failed and we were unable to recover it. 00:29:19.048 [2024-12-10 14:31:19.691242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.048 [2024-12-10 14:31:19.691277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.048 qpair failed and we were unable to recover it. 00:29:19.048 [2024-12-10 14:31:19.691397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.048 [2024-12-10 14:31:19.691431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.048 qpair failed and we were unable to recover it. 00:29:19.048 [2024-12-10 14:31:19.691614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.048 [2024-12-10 14:31:19.691647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.048 qpair failed and we were unable to recover it. 00:29:19.048 [2024-12-10 14:31:19.691851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.048 [2024-12-10 14:31:19.691884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.048 qpair failed and we were unable to recover it. 00:29:19.048 [2024-12-10 14:31:19.692063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.048 [2024-12-10 14:31:19.692097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.048 qpair failed and we were unable to recover it. 00:29:19.048 [2024-12-10 14:31:19.692238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.048 [2024-12-10 14:31:19.692272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.048 qpair failed and we were unable to recover it. 
00:29:19.048 [2024-12-10 14:31:19.692390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.048 [2024-12-10 14:31:19.692422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.048 qpair failed and we were unable to recover it. 00:29:19.048 [2024-12-10 14:31:19.692687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.048 [2024-12-10 14:31:19.692721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.048 qpair failed and we were unable to recover it. 00:29:19.048 [2024-12-10 14:31:19.692827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.048 [2024-12-10 14:31:19.692861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.048 qpair failed and we were unable to recover it. 00:29:19.048 [2024-12-10 14:31:19.693054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.048 [2024-12-10 14:31:19.693087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.048 qpair failed and we were unable to recover it. 00:29:19.048 [2024-12-10 14:31:19.693229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.693276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.693466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.693498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.693607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.693640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.693769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.693802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.693913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.693946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.694126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.694160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 
00:29:19.049 [2024-12-10 14:31:19.694271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.694305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.694477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.694510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.694627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.694661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.694785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.694816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.695017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.695051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.695166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.695199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.695358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.695392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.695572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.695605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.695724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.695757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.695885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.695918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 
00:29:19.049 [2024-12-10 14:31:19.696096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.696130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.696323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.696358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.696553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.696586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.696692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.696725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.696849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.696883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.696989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.697022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.697176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.697209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.697403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.697437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.697563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.697596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.697712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.697745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 
00:29:19.049 [2024-12-10 14:31:19.697929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.697962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.698070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.698106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.698310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.698343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.698468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.698500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.698670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.698701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.698878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.698911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.699116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.699160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.699297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.699332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.699514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.699547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.699657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.699689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 
00:29:19.049 [2024-12-10 14:31:19.699798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.699829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.700094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.700127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.700368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.049 [2024-12-10 14:31:19.700401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.049 qpair failed and we were unable to recover it. 00:29:19.049 [2024-12-10 14:31:19.700509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.700541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.700658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.700689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.700821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.700853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.701024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.701056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.701173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.701204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.701326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.701360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.701474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.701506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 
00:29:19.050 [2024-12-10 14:31:19.701748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.701780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.701978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.702011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.702139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.702170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.702370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.702403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.702581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.702613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.702804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.702837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.702960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.702993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.703273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.703306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.703430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.703469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.703582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.703613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 
00:29:19.050 [2024-12-10 14:31:19.703735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.703767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.703959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.703992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.704163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.704196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.704328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.704361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.704546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.704578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.704718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.704751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.705020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.705052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.705261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.705295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.705478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.705511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.705649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.705682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 
00:29:19.050 [2024-12-10 14:31:19.705872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.705905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.706012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.706045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.706171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.706202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.706391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.706424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.706557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.706590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.706717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.706749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.706857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.706889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.706996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.707029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.707129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.707162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.707377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.707410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 
00:29:19.050 [2024-12-10 14:31:19.707520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.707552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.707815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.707847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.050 [2024-12-10 14:31:19.707972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.050 [2024-12-10 14:31:19.708004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.050 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.708179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.708212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.708413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.708445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.708564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.708602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.708708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.708740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.708932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.708964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.709079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.709111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.709306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.709339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 
00:29:19.051 [2024-12-10 14:31:19.709527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.709560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.709741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.709774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.709887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.709919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.710023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.710055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.710185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.710241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.710351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.710383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.710559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.710591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.710709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.710742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.710916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.710947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.711143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.711176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 
00:29:19.051 [2024-12-10 14:31:19.711359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.711392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.711633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.711665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.711785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.711816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.711987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.712019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.712122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.712154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.712362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.712395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.712505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.712537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.712708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.712740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.712868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.712900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.713022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.713055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 
00:29:19.051 [2024-12-10 14:31:19.713254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.713287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.713461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.713494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.713606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.713643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.713859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.713892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.714080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.714112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.714288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.714320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.714516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.714549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.714728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.714761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.714867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.714898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.715041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.715074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 
00:29:19.051 [2024-12-10 14:31:19.715198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.715239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.715362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.715394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.051 [2024-12-10 14:31:19.715563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.051 [2024-12-10 14:31:19.715595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.051 qpair failed and we were unable to recover it. 00:29:19.052 [2024-12-10 14:31:19.715724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.052 [2024-12-10 14:31:19.715756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.052 qpair failed and we were unable to recover it. 00:29:19.052 [2024-12-10 14:31:19.715874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.052 [2024-12-10 14:31:19.715905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.052 qpair failed and we were unable to recover it. 00:29:19.052 [2024-12-10 14:31:19.716103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.052 [2024-12-10 14:31:19.716136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.052 qpair failed and we were unable to recover it. 00:29:19.052 [2024-12-10 14:31:19.716336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.052 [2024-12-10 14:31:19.716371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.052 qpair failed and we were unable to recover it. 00:29:19.052 [2024-12-10 14:31:19.716627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.052 [2024-12-10 14:31:19.716659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.052 qpair failed and we were unable to recover it. 00:29:19.052 [2024-12-10 14:31:19.716869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.052 [2024-12-10 14:31:19.716902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.052 qpair failed and we were unable to recover it. 00:29:19.052 [2024-12-10 14:31:19.717111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.052 [2024-12-10 14:31:19.717144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.052 qpair failed and we were unable to recover it. 
00:29:19.052 [2024-12-10 14:31:19.717257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.052 [2024-12-10 14:31:19.717293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:19.052 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, then nvme_tcp_qpair_connect_sock error for tqpair=0x240d500 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats verbatim for every reconnect attempt from 14:31:19.717419 through 14:31:19.761162 ...]
00:29:19.326 [2024-12-10 14:31:19.761162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.326 [2024-12-10 14:31:19.761193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:19.326 qpair failed and we were unable to recover it.
00:29:19.326 [2024-12-10 14:31:19.761406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.326 [2024-12-10 14:31:19.761438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.326 qpair failed and we were unable to recover it. 00:29:19.326 [2024-12-10 14:31:19.761574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.326 [2024-12-10 14:31:19.761604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.326 qpair failed and we were unable to recover it. 00:29:19.326 [2024-12-10 14:31:19.761824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.326 [2024-12-10 14:31:19.761855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.326 qpair failed and we were unable to recover it. 00:29:19.326 [2024-12-10 14:31:19.762067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.326 [2024-12-10 14:31:19.762098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.326 qpair failed and we were unable to recover it. 00:29:19.326 [2024-12-10 14:31:19.762291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.326 [2024-12-10 14:31:19.762323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.326 qpair failed and we were unable to recover it. 00:29:19.326 [2024-12-10 14:31:19.762508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.326 [2024-12-10 14:31:19.762538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.326 qpair failed and we were unable to recover it. 00:29:19.326 [2024-12-10 14:31:19.762741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.762771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.763055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.763085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.763333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.763364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.763538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.763569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 
00:29:19.327 [2024-12-10 14:31:19.763749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.763780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.763979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.764010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.764251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.764283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.764490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.764523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.764637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.764670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.764794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.764826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.765026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.765059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.765310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.765343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.765482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.765514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.765631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.765663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 
00:29:19.327 [2024-12-10 14:31:19.765931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.765963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.766106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.766137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.766259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.766292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.766575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.766607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.766799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.766831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.767008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.767041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.767214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.767261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.767367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.767400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.767657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.767688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.767903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.767935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 
00:29:19.327 [2024-12-10 14:31:19.768109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.768152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.768400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.768433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.768639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.768671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.768795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.768827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.768938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.768971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.769160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.769192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.769442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.769474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.769594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.769627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.769813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.769845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 00:29:19.327 [2024-12-10 14:31:19.769962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.327 [2024-12-10 14:31:19.769994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.327 qpair failed and we were unable to recover it. 
00:29:19.327 [2024-12-10 14:31:19.770098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.770130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.770396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.770429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.770620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.770653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.770846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.770878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.771136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.771169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.771421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.771454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.771649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.771682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.771892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.771923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.772124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.772157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.772350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.772383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 
00:29:19.328 [2024-12-10 14:31:19.772585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.772616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.772788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.772820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.773010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.773042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.773180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.773212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.773336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.773369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.773580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.773613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.773787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.773820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.774063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.774101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.774313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.774345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.774524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.774563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 
00:29:19.328 [2024-12-10 14:31:19.774702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.774735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.774905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.774937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.775179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.775211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.775498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.775532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.775700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.775733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.775962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.775994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.776116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.776148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.776344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.776378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.776556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.776587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.776855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.776887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 
00:29:19.328 [2024-12-10 14:31:19.777125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.777157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.777287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.777321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.777446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.777478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.777665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.777697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.777890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.777923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.778045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.778078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.778200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.778243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.778374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.778406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.778620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.778653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.328 qpair failed and we were unable to recover it. 00:29:19.328 [2024-12-10 14:31:19.778830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.328 [2024-12-10 14:31:19.778862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 
00:29:19.329 [2024-12-10 14:31:19.779058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.779090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.779290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.779324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.779595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.779629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.779804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.779837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.780080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.780119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.780318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.780351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.780636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.780669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.780841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.780873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.781086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.781118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.781307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.781340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 
00:29:19.329 [2024-12-10 14:31:19.781605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.781639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.781744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.781776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.781978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.782010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.782188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.782228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.782404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.782437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.782677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.782709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.782971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.783004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.783186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.783239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.783377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.783410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.783591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.783623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 
00:29:19.329 [2024-12-10 14:31:19.783756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.783789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.783925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.783957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.784139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.784171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.784357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.784389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.784642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.784676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.784876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.784907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.785126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.785159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.785366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.785400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.785631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.785664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.785839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.785872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 
00:29:19.329 [2024-12-10 14:31:19.786059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.786091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.786227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.786261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.786440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.786472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.786584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.786617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.786882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.786914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.787089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.787122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.787299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.787333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.787507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.787540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.329 [2024-12-10 14:31:19.787786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.329 [2024-12-10 14:31:19.787818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.329 qpair failed and we were unable to recover it. 00:29:19.330 [2024-12-10 14:31:19.788004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.330 [2024-12-10 14:31:19.788037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.330 qpair failed and we were unable to recover it. 
00:29:19.330 [2024-12-10 14:31:19.789352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b460 is same with the state(6) to be set
00:29:19.330 [2024-12-10 14:31:19.789650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.330 [2024-12-10 14:31:19.789723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:19.330 qpair failed and we were unable to recover it.
00:29:19.330 [... the same three-line failure repeats for tqpair=0x7f6370000b90, differing only in timestamps (2024-12-10 14:31:19.790013 through 14:31:19.799428) ...]
00:29:19.331 [2024-12-10 14:31:19.799668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.799701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.799829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.799861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.799977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.800010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.800134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.800165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.800363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.800396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.800588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.800620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.800820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.800852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.801120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.801152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.801341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.801375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.801573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.801606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 
00:29:19.331 [2024-12-10 14:31:19.801714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.801747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.801985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.802017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.802149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.802180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.802322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.802354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.802529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.802560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.802766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.802798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.802982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.803014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.803211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.803253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.803442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.803473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.803661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.803694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 
00:29:19.331 [2024-12-10 14:31:19.803879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.803912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.804101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.804133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.804328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.804368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.804545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.804578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.804831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.804871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.805064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.805096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.805245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.331 [2024-12-10 14:31:19.805279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.331 qpair failed and we were unable to recover it. 00:29:19.331 [2024-12-10 14:31:19.805411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.805444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.805662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.805694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.805876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.805908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 
00:29:19.332 [2024-12-10 14:31:19.806082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.806114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.806328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.806361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.806627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.806659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.806776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.806809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.807022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.807054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.807325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.807360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.807553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.807585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.807807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.807845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.808064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.808096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.808373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.808406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 
00:29:19.332 [2024-12-10 14:31:19.808598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.808630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.808818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.808850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.808970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.809000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.809188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.809228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.809519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.809551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.809727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.809759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.809949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.809980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.810174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.810206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.810400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.810433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.810627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.810660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 
00:29:19.332 [2024-12-10 14:31:19.810850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.810882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.811152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.811185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.811392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.811426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.811608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.811639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.811770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.811803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.811977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.812010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.812193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.812236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.812415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.812447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.812622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.812655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.812777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.812809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 
00:29:19.332 [2024-12-10 14:31:19.812932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.812964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.332 qpair failed and we were unable to recover it. 00:29:19.332 [2024-12-10 14:31:19.813071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.332 [2024-12-10 14:31:19.813103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.813293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.813332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.813593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.813625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.813744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.813776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.814010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.814042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.814215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.814257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.814525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.814558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.814743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.814775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.815037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.815069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 
00:29:19.333 [2024-12-10 14:31:19.815281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.815314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.815505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.815536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.815657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.815690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.815881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.815913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.816043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.816075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.816259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.816292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.816418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.816450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.816582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.816615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.816793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.816824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.816967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.817000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 
00:29:19.333 [2024-12-10 14:31:19.817182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.817215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.817335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.817365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.817508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.817540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.817649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.817679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.817858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.817890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.818105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.818137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.818272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.818306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.818486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.818518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.818707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.818739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.818881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.818914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 
00:29:19.333 [2024-12-10 14:31:19.819118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.819150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.819348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.819381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.819553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.819585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.819709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.819741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.819950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.819983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.820164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.820195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.820319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.820351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.820529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.820561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.820851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.820883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 00:29:19.333 [2024-12-10 14:31:19.820992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.821023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.333 qpair failed and we were unable to recover it. 
00:29:19.333 [2024-12-10 14:31:19.821142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.333 [2024-12-10 14:31:19.821174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.821305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.821338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.821580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.821618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.821809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.821841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.821956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.821986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.822196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.822239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.822372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.822403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.822509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.822541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.822782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.822814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.822999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.823031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 
00:29:19.334 [2024-12-10 14:31:19.823155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.823187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.823321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.823353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.823524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.823555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.823728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.823760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.823894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.823928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.824116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.824148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.824304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.824338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.824525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.824557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.824728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.824760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.824944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.824977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 
00:29:19.334 [2024-12-10 14:31:19.825103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.825136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.825264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.825299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.825442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.825474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.825690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.825721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.825995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.826028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.826230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.826264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.826475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.826508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.826700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.826732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.826868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.826900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.827171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.827204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 
00:29:19.334 [2024-12-10 14:31:19.827400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.827432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.827674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.827705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.827831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.827864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.827992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.828024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.828158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.828191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.828411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.828443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.828683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.828714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.828902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.828934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.829058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.829090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 00:29:19.334 [2024-12-10 14:31:19.829280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.334 [2024-12-10 14:31:19.829315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.334 qpair failed and we were unable to recover it. 
00:29:19.335 [2024-12-10 14:31:19.829561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.829595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.829717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.829750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.830026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.830067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.830244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.830283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.830414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.830458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.830731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.830762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.830945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.830976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.831238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.831271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.831449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.831481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.831594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.831624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 
00:29:19.335 [2024-12-10 14:31:19.831745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.831776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.831970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.832001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.832262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.832295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.832511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.832543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.832664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.832695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.832837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.832869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.833066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.833098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.833290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.833322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.833431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.833462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.833650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.833683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 
00:29:19.335 [2024-12-10 14:31:19.833855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.833886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.834008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.834038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.834246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.834279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.834526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.834557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.834736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.834768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.834909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.834940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.835120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.835153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.835344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.835376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.835564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.835595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.835753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.835823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 
00:29:19.335 [2024-12-10 14:31:19.836052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.836087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.836236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.836272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.836404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.836436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.836675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.836707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.836848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.836880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.837061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.837091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.837206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.837254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.837525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.837557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.837801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.335 [2024-12-10 14:31:19.837832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.335 qpair failed and we were unable to recover it. 00:29:19.335 [2024-12-10 14:31:19.837955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.837986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 
00:29:19.336 [2024-12-10 14:31:19.838118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.838150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.838363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.838396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.838642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.838673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.838808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.838841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.838964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.838995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.839143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.839174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.839363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.839402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.839601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.839632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.839824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.839855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.840062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.840093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 
00:29:19.336 [2024-12-10 14:31:19.840234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.840267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.840417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.840449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.840639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.840672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.840780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.840812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.840991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.841022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.841229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.841261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.841445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.841483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.841611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.841642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.841833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.841864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.841980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.842011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 
00:29:19.336 [2024-12-10 14:31:19.842181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.842212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.842426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.842457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.842596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.842627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.842736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.842766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.843014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.843046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.843183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.843214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.843405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.843437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.843614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.843646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.843889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.843919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.844030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.844062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 
00:29:19.336 [2024-12-10 14:31:19.844190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.844232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.844363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.844394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.844583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.844612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.844723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.336 [2024-12-10 14:31:19.844752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.336 qpair failed and we were unable to recover it. 00:29:19.336 [2024-12-10 14:31:19.844953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.844983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.845098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.845128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.845251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.845283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.845407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.845436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.845544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.845573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.845758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.845788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 
00:29:19.337 [2024-12-10 14:31:19.845914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.845943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.846079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.846109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.846287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.846318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.846511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.846546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.846727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.846757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.846861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.846891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.847018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.847046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.847169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.847198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.847384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.847417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.847556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.847585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 
00:29:19.337 [2024-12-10 14:31:19.847718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.847748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.847858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.847888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.848000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.848029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.848207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.848246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.848369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.848399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.848508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.848537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.848711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.848739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.848925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.848954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.849172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.849201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.849328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.849358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 
00:29:19.337 [2024-12-10 14:31:19.849532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.849563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.849670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.849699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.849880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.849910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.850043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.850073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.850207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.850257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.850390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.850420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.850597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.850626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.850751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.850780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.850972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.851002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.851129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.851158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 
00:29:19.337 [2024-12-10 14:31:19.851352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.851389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.851514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.851543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.851646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.851675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.851788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.337 [2024-12-10 14:31:19.851817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.337 qpair failed and we were unable to recover it. 00:29:19.337 [2024-12-10 14:31:19.852027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.852056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.852239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.852270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.852385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.852414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.852592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.852622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.852729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.852758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.852895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.852925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 
00:29:19.338 [2024-12-10 14:31:19.853042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.853072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.853197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.853235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.853358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.853387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.853512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.853542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.853702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.853771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.853961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.854024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.854241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.854277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.854394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.854426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.854609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.854640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.854761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.854792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 
00:29:19.338 [2024-12-10 14:31:19.854989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.855020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.855133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.855165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.855353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.855385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.855533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.855566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.855692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.855734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.855933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.855965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.856084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.856115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.856294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.856339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.856469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.856502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.856691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.856723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 
00:29:19.338 [2024-12-10 14:31:19.856910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.856942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.857134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.857166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.857303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.857335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.857458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.857489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.857609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.857640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.857764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.857795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.857972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.858004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.858116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.858148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.858331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.858364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.858479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.858511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 
00:29:19.338 [2024-12-10 14:31:19.858627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.858658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.858860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.858890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.859189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.338 [2024-12-10 14:31:19.859226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.338 qpair failed and we were unable to recover it. 00:29:19.338 [2024-12-10 14:31:19.859423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.339 [2024-12-10 14:31:19.859455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.339 qpair failed and we were unable to recover it. 00:29:19.339 [2024-12-10 14:31:19.859569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.339 [2024-12-10 14:31:19.859600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.339 qpair failed and we were unable to recover it. 00:29:19.339 [2024-12-10 14:31:19.859721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.339 [2024-12-10 14:31:19.859752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.339 qpair failed and we were unable to recover it. 00:29:19.339 [2024-12-10 14:31:19.859865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.339 [2024-12-10 14:31:19.859896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.339 qpair failed and we were unable to recover it. 00:29:19.339 [2024-12-10 14:31:19.860002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.339 [2024-12-10 14:31:19.860036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.339 qpair failed and we were unable to recover it. 00:29:19.339 [2024-12-10 14:31:19.860150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.339 [2024-12-10 14:31:19.860181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.339 qpair failed and we were unable to recover it. 00:29:19.339 [2024-12-10 14:31:19.860325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.339 [2024-12-10 14:31:19.860358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.339 qpair failed and we were unable to recover it. 
00:29:19.339 [2024-12-10 14:31:19.860530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.339 [2024-12-10 14:31:19.860562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.339 qpair failed and we were unable to recover it. 00:29:19.339 [2024-12-10 14:31:19.860691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.339 [2024-12-10 14:31:19.860723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.339 qpair failed and we were unable to recover it. 00:29:19.339 [2024-12-10 14:31:19.860910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.339 [2024-12-10 14:31:19.860940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.339 qpair failed and we were unable to recover it. 00:29:19.339 [2024-12-10 14:31:19.861179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.339 [2024-12-10 14:31:19.861209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.339 qpair failed and we were unable to recover it. 00:29:19.339 [2024-12-10 14:31:19.861384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.339 [2024-12-10 14:31:19.861452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.339 qpair failed and we were unable to recover it. 00:29:19.339 [2024-12-10 14:31:19.861581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.339 [2024-12-10 14:31:19.861615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.339 qpair failed and we were unable to recover it. 00:29:19.339 [2024-12-10 14:31:19.861734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.339 [2024-12-10 14:31:19.861765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.339 qpair failed and we were unable to recover it. 00:29:19.339 [2024-12-10 14:31:19.861895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.339 [2024-12-10 14:31:19.861927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.339 qpair failed and we were unable to recover it. 00:29:19.339 [2024-12-10 14:31:19.862047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.339 [2024-12-10 14:31:19.862079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.339 qpair failed and we were unable to recover it. 00:29:19.339 [2024-12-10 14:31:19.862267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.339 [2024-12-10 14:31:19.862300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.339 qpair failed and we were unable to recover it. 
00:29:19.340 [2024-12-10 14:31:19.868101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.340 [2024-12-10 14:31:19.868171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:19.340 qpair failed and we were unable to recover it.
00:29:19.340 [2024-12-10 14:31:19.870594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.609 [2024-12-10 14:31:20.184849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:19.609 qpair failed and we were unable to recover it.
00:29:19.613 [2024-12-10 14:31:20.218591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.218624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.218820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.218853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.219127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.219160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.219412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.219446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.219664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.219703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.219893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.219925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.220043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.220076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.220342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.220378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.220593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.220625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.220881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.220914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 
00:29:19.613 [2024-12-10 14:31:20.221168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.221201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.221402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.221435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.221693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.221726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.221938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.221971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.222263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.222298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.222499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.222532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.222724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.222757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.222946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.222979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.223238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.223274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.223523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.223558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 
00:29:19.613 [2024-12-10 14:31:20.223731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.223763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.224050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.224084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.224361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.224396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.224674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.224707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.224851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.224884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.225077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.613 [2024-12-10 14:31:20.225110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.613 qpair failed and we were unable to recover it. 00:29:19.613 [2024-12-10 14:31:20.225367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.225402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.225595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.225628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.225897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.225930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.226194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.226235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 
00:29:19.614 [2024-12-10 14:31:20.226494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.226527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.226726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.226759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.227000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.227034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.227289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.227324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.227553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.227586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.227852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.227885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.228172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.228206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.228329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.228363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.228545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.228578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.228841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.228873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 
00:29:19.614 [2024-12-10 14:31:20.229050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.229084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.229276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.229311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.229423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.229457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.229755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.229788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.229969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.230013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.230243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.230277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.230469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.230503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.230687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.230719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.230958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.230991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.231165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.231198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 
00:29:19.614 [2024-12-10 14:31:20.231453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.231488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.231727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.231759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.231964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.231997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.232173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.232206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.232399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.232432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.232691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.232723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.232877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.232911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.233177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.233210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.233419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.233452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.233743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.233775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 
00:29:19.614 [2024-12-10 14:31:20.233889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.233922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.234103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.234135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.234320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.234356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.234621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.234655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.234894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.614 [2024-12-10 14:31:20.234927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.614 qpair failed and we were unable to recover it. 00:29:19.614 [2024-12-10 14:31:20.235266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.235301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.235567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.235601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.235805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.235839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.235988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.236021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.236312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.236347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 
00:29:19.615 [2024-12-10 14:31:20.236622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.236655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.236863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.236896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.237139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.237173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.237366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.237400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.237671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.237705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.237969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.238002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.238210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.238255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.238456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.238489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.238624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.238656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.238772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.238803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 
00:29:19.615 [2024-12-10 14:31:20.239001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.239034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.239239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.239274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.239450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.239484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.239690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.239726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.239992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.240033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.240332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.240368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.240567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.240600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.240844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.240877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.241056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.241088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.241312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.241346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 
00:29:19.615 [2024-12-10 14:31:20.241535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.241569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.241836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.241868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.242133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.242170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.242391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.242425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.242603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.242636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.242925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.242959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.243236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.243270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.243569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.243603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.243835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.243868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.244062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.244095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 
00:29:19.615 [2024-12-10 14:31:20.244338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.244372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.244642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.244676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.244868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.244901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.245081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.615 [2024-12-10 14:31:20.245113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.615 qpair failed and we were unable to recover it. 00:29:19.615 [2024-12-10 14:31:20.245291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.245325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.245524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.245557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.245746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.245780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.245914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.245947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.246190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.246231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.246478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.246511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 
00:29:19.616 [2024-12-10 14:31:20.246754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.246787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.247013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.247047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.247297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.247332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.247520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.247553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.247793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.247827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.248069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.248102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.248341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.248375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.248569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.248603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.248802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.248836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.248964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.249000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 
00:29:19.616 [2024-12-10 14:31:20.249193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.249234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.249524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.249558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.249750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.249782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.250070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.250104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.250459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.250500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.250681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.250713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.250997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.251030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.251171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.251204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.251464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.251498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.251763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.251796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 
00:29:19.616 [2024-12-10 14:31:20.252105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.252139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.252401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.252436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.252611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.252645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.252787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.252822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.252939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.252971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.253227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.253261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.253458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.253492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.253735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.253768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.253974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.254008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 00:29:19.616 [2024-12-10 14:31:20.254135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.616 [2024-12-10 14:31:20.254170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.616 qpair failed and we were unable to recover it. 
00:29:19.616 [2024-12-10 14:31:20.254465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.616 [2024-12-10 14:31:20.254499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:19.616 qpair failed and we were unable to recover it.
00:29:19.616 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x7f6364000b90 through 2024-12-10 14:31:20.276270 ...]
00:29:19.619 [2024-12-10 14:31:20.276648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.619 [2024-12-10 14:31:20.276728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:19.619 qpair failed and we were unable to recover it.
00:29:19.619 [... triplet repeats for tqpair=0x7f6370000b90 through 2024-12-10 14:31:20.296511 ...]
00:29:19.621 [2024-12-10 14:31:20.296704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.621 [2024-12-10 14:31:20.296783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:19.621 qpair failed and we were unable to recover it.
00:29:19.621 [... triplet repeats for tqpair=0x240d500 through 2024-12-10 14:31:20.298185, then again for tqpair=0x7f6370000b90 through 2024-12-10 14:31:20.307064 ...]
00:29:19.622 [2024-12-10 14:31:20.307337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.622 [2024-12-10 14:31:20.307373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.622 qpair failed and we were unable to recover it. 00:29:19.622 [2024-12-10 14:31:20.307571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.622 [2024-12-10 14:31:20.307611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.622 qpair failed and we were unable to recover it. 00:29:19.622 [2024-12-10 14:31:20.307810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.622 [2024-12-10 14:31:20.307843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.622 qpair failed and we were unable to recover it. 00:29:19.622 [2024-12-10 14:31:20.308065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.622 [2024-12-10 14:31:20.308098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.622 qpair failed and we were unable to recover it. 00:29:19.622 [2024-12-10 14:31:20.308313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.622 [2024-12-10 14:31:20.308347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.622 qpair failed and we were unable to recover it. 00:29:19.622 [2024-12-10 14:31:20.308595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.622 [2024-12-10 14:31:20.308628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.622 qpair failed and we were unable to recover it. 00:29:19.622 [2024-12-10 14:31:20.308910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.622 [2024-12-10 14:31:20.308943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.622 qpair failed and we were unable to recover it. 00:29:19.622 [2024-12-10 14:31:20.309136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.622 [2024-12-10 14:31:20.309169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.622 qpair failed and we were unable to recover it. 00:29:19.622 [2024-12-10 14:31:20.309393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.622 [2024-12-10 14:31:20.309427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.622 qpair failed and we were unable to recover it. 00:29:19.622 [2024-12-10 14:31:20.309625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.622 [2024-12-10 14:31:20.309658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.622 qpair failed and we were unable to recover it. 
00:29:19.622 [2024-12-10 14:31:20.309837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.622 [2024-12-10 14:31:20.309871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.622 qpair failed and we were unable to recover it. 00:29:19.622 [2024-12-10 14:31:20.310002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.622 [2024-12-10 14:31:20.310035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.622 qpair failed and we were unable to recover it. 00:29:19.622 [2024-12-10 14:31:20.310308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.622 [2024-12-10 14:31:20.310343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.622 qpair failed and we were unable to recover it. 00:29:19.622 [2024-12-10 14:31:20.310484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.622 [2024-12-10 14:31:20.310517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.622 qpair failed and we were unable to recover it. 00:29:19.622 [2024-12-10 14:31:20.310700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.622 [2024-12-10 14:31:20.310732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.622 qpair failed and we were unable to recover it. 00:29:19.622 [2024-12-10 14:31:20.310959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.622 [2024-12-10 14:31:20.310993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.622 qpair failed and we were unable to recover it. 00:29:19.622 [2024-12-10 14:31:20.311128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.622 [2024-12-10 14:31:20.311162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.622 qpair failed and we were unable to recover it. 00:29:19.622 [2024-12-10 14:31:20.311427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.622 [2024-12-10 14:31:20.311462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.622 qpair failed and we were unable to recover it. 00:29:19.622 [2024-12-10 14:31:20.311741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.311775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.311959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.311992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 
00:29:19.623 [2024-12-10 14:31:20.312137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.312170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.312451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.312485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.312766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.312800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.313081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.313114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.313319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.313353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.313620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.313659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.313971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.314004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.314322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.314357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.314561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.314596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.314775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.314808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 
00:29:19.623 [2024-12-10 14:31:20.314929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.314960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.315086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.315120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.315313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.315349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.315624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.315659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.315862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.315896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.316025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.316060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.316315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.316348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.316654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.316688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.316945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.316981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.317118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.317152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 
00:29:19.623 [2024-12-10 14:31:20.317440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.317476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.317660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.317699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.317830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.317864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.318077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.318111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.318312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.318347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.318535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.318569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.318783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.318816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.319091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.319124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.319333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.319369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.319600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.319633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 
00:29:19.623 [2024-12-10 14:31:20.319792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.319826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.320100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.320133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.320293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.320328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.320474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.320508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.320693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.320727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.321055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.321088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.321273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.623 [2024-12-10 14:31:20.321308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.623 qpair failed and we were unable to recover it. 00:29:19.623 [2024-12-10 14:31:20.321569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.321602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.321839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.321872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.322079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.322112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 
00:29:19.624 [2024-12-10 14:31:20.322303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.322338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.322537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.322570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.322751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.322785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.323062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.323095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.323288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.323323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.323525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.323558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.323783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.323818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.323957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.323991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.324210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.324252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.324458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.324492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 
00:29:19.624 [2024-12-10 14:31:20.324719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.324753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.324957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.324991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.325123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.325158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.325444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.325478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.325754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.325788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.325989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.326022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.326281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.326315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.326511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.326545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.326732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.326765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.327024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.327057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 
00:29:19.624 [2024-12-10 14:31:20.327276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.327311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.327585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.327625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.327761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.327795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.327977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.328009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.328232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.328268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.328398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.328432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.328636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.328669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.328943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.328977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.329172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.329206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.329424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.329459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 
00:29:19.624 [2024-12-10 14:31:20.329715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.329749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.330065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.330099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.330301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.330336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.330613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.330646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.330831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.330866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.331030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.624 [2024-12-10 14:31:20.331064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.624 qpair failed and we were unable to recover it. 00:29:19.624 [2024-12-10 14:31:20.331280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.331317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.331497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.331530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.331789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.331823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.332133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.332166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 
00:29:19.625 [2024-12-10 14:31:20.332310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.332344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.332532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.332567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.332863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.332896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.333155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.333188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.333517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.333551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.333757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.333792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.333935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.333970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.334252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.334289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.334462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.334495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.334774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.334808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 
00:29:19.625 [2024-12-10 14:31:20.335088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.335121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.335340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.335374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.335528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.335563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.335770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.335804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.335932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.335965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.336099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.336134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.336437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.336473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.336587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.336620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.336881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.336916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.337191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.337235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 
00:29:19.625 [2024-12-10 14:31:20.337510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.337544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.337832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.337872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.338164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.338200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.338503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.338537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.625 [2024-12-10 14:31:20.338796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.625 [2024-12-10 14:31:20.338830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.625 qpair failed and we were unable to recover it. 00:29:19.902 [2024-12-10 14:31:20.339050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.902 [2024-12-10 14:31:20.339083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.902 qpair failed and we were unable to recover it. 00:29:19.902 [2024-12-10 14:31:20.339321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.902 [2024-12-10 14:31:20.339357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.902 qpair failed and we were unable to recover it. 00:29:19.902 [2024-12-10 14:31:20.339563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.902 [2024-12-10 14:31:20.339597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.902 qpair failed and we were unable to recover it. 00:29:19.902 [2024-12-10 14:31:20.339808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.902 [2024-12-10 14:31:20.339843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.902 qpair failed and we were unable to recover it. 00:29:19.902 [2024-12-10 14:31:20.339989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.902 [2024-12-10 14:31:20.340024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.902 qpair failed and we were unable to recover it. 
00:29:19.902 [2024-12-10 14:31:20.340214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.902 [2024-12-10 14:31:20.340258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:19.902 qpair failed and we were unable to recover it.
00:29:19.902 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats with only the timestamps changing, from 2024-12-10 14:31:20.340532 through 14:31:20.387594 ...]
00:29:19.907 [2024-12-10 14:31:20.387885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.907 [2024-12-10 14:31:20.387965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:19.907 qpair failed and we were unable to recover it.
00:29:19.907 [... the same sequence then repeats for tqpair=0x7f6368000b90, again with only the timestamps changing, through the end of this excerpt ...]
00:29:19.907 [2024-12-10 14:31:20.397020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.907 [2024-12-10 14:31:20.397054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:19.907 qpair failed and we were unable to recover it.
00:29:19.907 [2024-12-10 14:31:20.397329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.907 [2024-12-10 14:31:20.397364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.397559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.397593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.397837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.397871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.398090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.398124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.398314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.398349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.398572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.398606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.398801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.398835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.399022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.399056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.399341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.399377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.399519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.399553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 
00:29:19.908 [2024-12-10 14:31:20.399751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.399785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.399976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.400010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.400237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.400273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.400469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.400503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.400695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.400728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.400940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.400975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.401172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.401207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.401477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.401515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.401657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.401692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.401886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.401920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 
00:29:19.908 [2024-12-10 14:31:20.402113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.402147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.402355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.402397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.402675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.402709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.402856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.402889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.403047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.403080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.403336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.403373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.403620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.403653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.403869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.403903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.404092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.404128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.404389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.404426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 
00:29:19.908 [2024-12-10 14:31:20.404566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.404600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.404876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.404910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.405216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.405257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.405527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.405562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.405749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.405783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.406097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.406132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.406416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.406452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.406731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.406766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.407039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.407075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 00:29:19.908 [2024-12-10 14:31:20.407316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.908 [2024-12-10 14:31:20.407353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.908 qpair failed and we were unable to recover it. 
00:29:19.909 [2024-12-10 14:31:20.407561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.407595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.407829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.407863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.408001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.408035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.408248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.408284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.408427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.408462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.408608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.408642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.408800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.408834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.409032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.409068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.409265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.409300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.409585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.409619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 
00:29:19.909 [2024-12-10 14:31:20.409828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.409863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.410122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.410155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.410366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.410403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.410669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.410703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.410998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.411034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.411245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.411282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.411543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.411578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.411860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.411895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.412107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.412141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.412371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.412407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 
00:29:19.909 [2024-12-10 14:31:20.412605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.412639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.412942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.412983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.413170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.413204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.413472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.413507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.413719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.413754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.414017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.414050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.414180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.414216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.414511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.414546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.414754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.414787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.414987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.415021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 
00:29:19.909 [2024-12-10 14:31:20.415203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.415255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.415447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.415481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.415746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.415780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.416054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.416088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.416385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.416420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.416621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.416656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.416862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.416897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.417097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.417131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.417316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.909 [2024-12-10 14:31:20.417351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.909 qpair failed and we were unable to recover it. 00:29:19.909 [2024-12-10 14:31:20.417651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.417686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 
00:29:19.910 [2024-12-10 14:31:20.417816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.417850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.418011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.418046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.418301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.418336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.418542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.418576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.418713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.418749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.419044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.419079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.419383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.419420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.419557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.419591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.419906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.419986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.420298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.420341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 
00:29:19.910 [2024-12-10 14:31:20.420575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.420612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.420842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.420877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.421161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.421196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.421445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.421480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.421682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.421717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.421871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.421906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.422139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.422174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.422377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.422412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.422702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.422737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.422926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.422961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 
00:29:19.910 [2024-12-10 14:31:20.423093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.423129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.423341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.423388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.423601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.423636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.423890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.423925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.424185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.424230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.424509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.424544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.424762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.424797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.425103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.425138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.425389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.425424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.425635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.425670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 
00:29:19.910 [2024-12-10 14:31:20.425875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.425909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.426116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.426152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.426320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.426358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.426557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.426592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.426859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.426894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.427097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.427132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.427261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.427297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.427430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.910 [2024-12-10 14:31:20.427464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.910 qpair failed and we were unable to recover it. 00:29:19.910 [2024-12-10 14:31:20.427744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.911 [2024-12-10 14:31:20.427777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.911 qpair failed and we were unable to recover it. 00:29:19.911 [2024-12-10 14:31:20.427914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.911 [2024-12-10 14:31:20.427950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.911 qpair failed and we were unable to recover it. 
00:29:19.911 [2024-12-10 14:31:20.428170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.911 [2024-12-10 14:31:20.428204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.911 qpair failed and we were unable to recover it. 00:29:19.911 [2024-12-10 14:31:20.428418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.911 [2024-12-10 14:31:20.428453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.911 qpair failed and we were unable to recover it. 00:29:19.911 [2024-12-10 14:31:20.428604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.911 [2024-12-10 14:31:20.428640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.911 qpair failed and we were unable to recover it. 00:29:19.911 [2024-12-10 14:31:20.428923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.911 [2024-12-10 14:31:20.428957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.911 qpair failed and we were unable to recover it. 00:29:19.911 [2024-12-10 14:31:20.429102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.911 [2024-12-10 14:31:20.429137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.911 qpair failed and we were unable to recover it. 00:29:19.911 [2024-12-10 14:31:20.429417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.911 [2024-12-10 14:31:20.429451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.911 qpair failed and we were unable to recover it. 00:29:19.911 [2024-12-10 14:31:20.429589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.911 [2024-12-10 14:31:20.429625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.911 qpair failed and we were unable to recover it. 00:29:19.911 [2024-12-10 14:31:20.429827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.911 [2024-12-10 14:31:20.429862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.911 qpair failed and we were unable to recover it. 00:29:19.911 [2024-12-10 14:31:20.430146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.911 [2024-12-10 14:31:20.430182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.911 qpair failed and we were unable to recover it. 00:29:19.911 [2024-12-10 14:31:20.430325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.911 [2024-12-10 14:31:20.430361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.911 qpair failed and we were unable to recover it. 
00:29:19.911 [2024-12-10 14:31:20.430590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.911 [2024-12-10 14:31:20.430623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.911 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7f6370000b90 (addr=10.0.0.2, port=4420) repeats for every retry attempt between 14:31:20.430 and 14:31:20.477; duplicate entries collapsed ...]
00:29:19.916 [2024-12-10 14:31:20.477900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.916 [2024-12-10 14:31:20.477933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.916 qpair failed and we were unable to recover it.
00:29:19.917 [2024-12-10 14:31:20.478136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.478170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.478305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.478340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.478615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.478650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.478786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.478819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.478957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.478991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.479208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.479253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.479525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.479558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.479771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.479805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.480102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.480136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.480336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.480372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 
00:29:19.917 [2024-12-10 14:31:20.480498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.480532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.480662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.480697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.480992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.481025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.481340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.481376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.481570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.481604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.481913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.481947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.482170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.482203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.482337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.482372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.482494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.482528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.482811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.482845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 
00:29:19.917 [2024-12-10 14:31:20.482981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.483015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.483237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.483272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.483459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.483493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.483684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.483719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.484023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.484056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.484188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.484232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.484535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.484570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.484837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.484870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.485150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.485190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.485402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.485438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 
00:29:19.917 [2024-12-10 14:31:20.485687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.485721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.485998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.486033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.486314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.486351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.486545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.486578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.486760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.486794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.487085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.487120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.487301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.487335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.487599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.487634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.487761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.917 [2024-12-10 14:31:20.487794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.917 qpair failed and we were unable to recover it. 00:29:19.917 [2024-12-10 14:31:20.488009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.488044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 
00:29:19.918 [2024-12-10 14:31:20.488242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.488278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.488461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.488500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.488778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.488813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.488995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.489029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.489299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.489333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.489525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.489570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.489761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.489795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.490012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.490045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.490249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.490285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.490467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.490502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 
00:29:19.918 [2024-12-10 14:31:20.490635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.490669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.490933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.490966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.491152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.491186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.491350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.491386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.491665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.491699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.491828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.491862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.492113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.492147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.492444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.492480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.492684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.492719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.492914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.492960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 
00:29:19.918 [2024-12-10 14:31:20.493148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.493182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.493388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.493422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.493692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.493726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.493931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.493964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.494249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.494285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.494565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.494599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.494853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.494887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.495084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.495119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.495373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.495413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.495686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.495719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 
00:29:19.918 [2024-12-10 14:31:20.495994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.496028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.496348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.496383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.496683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.496719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.496963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.496997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.497196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.497240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.497437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.497470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.497607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.497640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.497967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.918 [2024-12-10 14:31:20.498001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.918 qpair failed and we were unable to recover it. 00:29:19.918 [2024-12-10 14:31:20.498199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.498243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.498498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.498533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 
00:29:19.919 [2024-12-10 14:31:20.498833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.498867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.499078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.499111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.499299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.499335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.499589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.499622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.499876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.499910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.500127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.500162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.500295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.500331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.500637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.500671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.500919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.500954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.501158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.501192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 
00:29:19.919 [2024-12-10 14:31:20.501394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.501429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.501574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.501610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.501805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.501841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.502001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.502035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.502243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.502277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.502413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.502449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.502680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.502714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.503045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.503079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.503273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.503308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.503439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.503473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 
00:29:19.919 [2024-12-10 14:31:20.503700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.503733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.504005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.504040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.504258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.504294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.504486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.504519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.504656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.504690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.504909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.504944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.505089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.505122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.505407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.505442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.505731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.505775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.505963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.505997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 
00:29:19.919 [2024-12-10 14:31:20.506198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.506244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.506499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.506535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.506732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.506767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.507043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.507077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.507345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.507382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.507581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.507615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.507765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.507799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.919 qpair failed and we were unable to recover it. 00:29:19.919 [2024-12-10 14:31:20.508073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.919 [2024-12-10 14:31:20.508109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.920 qpair failed and we were unable to recover it. 00:29:19.920 [2024-12-10 14:31:20.508366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.920 [2024-12-10 14:31:20.508403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.920 qpair failed and we were unable to recover it. 00:29:19.920 [2024-12-10 14:31:20.508702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.920 [2024-12-10 14:31:20.508738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.920 qpair failed and we were unable to recover it. 
00:29:19.920 [2024-12-10 14:31:20.508921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.920 [2024-12-10 14:31:20.508954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.920 qpair failed and we were unable to recover it. 00:29:19.920 [2024-12-10 14:31:20.509236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.920 [2024-12-10 14:31:20.509273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.920 qpair failed and we were unable to recover it. 00:29:19.920 [2024-12-10 14:31:20.509560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.920 [2024-12-10 14:31:20.509595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.920 qpair failed and we were unable to recover it. 00:29:19.920 [2024-12-10 14:31:20.509871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.920 [2024-12-10 14:31:20.509904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.920 qpair failed and we were unable to recover it. 00:29:19.920 [2024-12-10 14:31:20.510108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.920 [2024-12-10 14:31:20.510141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.920 qpair failed and we were unable to recover it. 00:29:19.920 [2024-12-10 14:31:20.510256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.920 [2024-12-10 14:31:20.510292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.920 qpair failed and we were unable to recover it. 00:29:19.920 [2024-12-10 14:31:20.510483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.920 [2024-12-10 14:31:20.510517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.920 qpair failed and we were unable to recover it. 00:29:19.920 [2024-12-10 14:31:20.510768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.920 [2024-12-10 14:31:20.510802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.920 qpair failed and we were unable to recover it. 00:29:19.920 [2024-12-10 14:31:20.510953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.920 [2024-12-10 14:31:20.510988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.920 qpair failed and we were unable to recover it. 00:29:19.920 [2024-12-10 14:31:20.511191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.920 [2024-12-10 14:31:20.511238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.920 qpair failed and we were unable to recover it. 
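For reference, errno = 111 on Linux is ECONNREFUSED: each TCP connection attempt to the target at 10.0.0.2 port 4420 (the IANA-assigned NVMe/TCP port) is being actively refused because nothing is accepting on that port at that moment. A minimal standalone sketch using plain POSIX sockets, not SPDK code, reproduces the same failure when no listener is present:

/* Sketch: reproduce "connect() failed, errno = 111" against an
 * address/port with no listener (here the target from the log). */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),        /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With the host reachable but no listener bound, this prints
         * errno = 111 (ECONNREFUSED), matching the posix.c:1054 line. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}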
00:29:19.920 [2024-12-10 14:31:20.512445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.920 [2024-12-10 14:31:20.512524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.920 qpair failed and we were unable to recover it. 00:29:19.920 [2024-12-10 14:31:20.512807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.920 [2024-12-10 14:31:20.512845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.920 qpair failed and we were unable to recover it. 00:29:19.920 [2024-12-10 14:31:20.513168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.920 [2024-12-10 14:31:20.513254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.920 qpair failed and we were unable to recover it.
[... the tqpair=0x7f6370000b90 connect()/qpair-failure sequence resumes and repeats back to back through 14:31:20.522; final occurrence: ...]
00:29:19.921 [2024-12-10 14:31:20.522402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.522438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.921 qpair failed and we were unable to recover it.
00:29:19.921 [2024-12-10 14:31:20.522718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.522752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.921 qpair failed and we were unable to recover it. 00:29:19.921 [2024-12-10 14:31:20.523058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.523092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.921 qpair failed and we were unable to recover it. 00:29:19.921 [2024-12-10 14:31:20.523401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.523436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.921 qpair failed and we were unable to recover it. 00:29:19.921 [2024-12-10 14:31:20.523716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.523752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.921 qpair failed and we were unable to recover it. 00:29:19.921 [2024-12-10 14:31:20.523960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.523993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.921 qpair failed and we were unable to recover it. 00:29:19.921 [2024-12-10 14:31:20.524156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.524191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.921 qpair failed and we were unable to recover it. 00:29:19.921 [2024-12-10 14:31:20.524484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.524518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.921 qpair failed and we were unable to recover it. 00:29:19.921 [2024-12-10 14:31:20.524790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.524824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.921 qpair failed and we were unable to recover it. 00:29:19.921 [2024-12-10 14:31:20.525027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.525061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.921 qpair failed and we were unable to recover it. 00:29:19.921 [2024-12-10 14:31:20.525315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.525351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.921 qpair failed and we were unable to recover it. 
00:29:19.921 [2024-12-10 14:31:20.525533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.525568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.921 qpair failed and we were unable to recover it. 00:29:19.921 [2024-12-10 14:31:20.525843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.525877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.921 qpair failed and we were unable to recover it. 00:29:19.921 [2024-12-10 14:31:20.525998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.526034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.921 qpair failed and we were unable to recover it. 00:29:19.921 [2024-12-10 14:31:20.526315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.526351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.921 qpair failed and we were unable to recover it. 00:29:19.921 [2024-12-10 14:31:20.526540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.526574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.921 qpair failed and we were unable to recover it. 00:29:19.921 [2024-12-10 14:31:20.526889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.526923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.921 qpair failed and we were unable to recover it. 00:29:19.921 [2024-12-10 14:31:20.527113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.527147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.921 qpair failed and we were unable to recover it. 00:29:19.921 [2024-12-10 14:31:20.527409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.527444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.921 qpair failed and we were unable to recover it. 00:29:19.921 [2024-12-10 14:31:20.527603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.527643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.921 qpair failed and we were unable to recover it. 00:29:19.921 [2024-12-10 14:31:20.527944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.527978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.921 qpair failed and we were unable to recover it. 
00:29:19.921 [2024-12-10 14:31:20.528259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.528296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.921 qpair failed and we were unable to recover it. 00:29:19.921 [2024-12-10 14:31:20.528510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.921 [2024-12-10 14:31:20.528545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.528775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.528809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.528991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.529024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.529293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.529329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.529530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.529565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.529844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.529878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.530181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.530215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.530525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.530560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.530823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.530857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 
00:29:19.922 [2024-12-10 14:31:20.531111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.531146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.531401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.531437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.531728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.531763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.531911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.531944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.532097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.532131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.532343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.532379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.532515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.532550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.532825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.532859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.533060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.533095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.533303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.533338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 
00:29:19.922 [2024-12-10 14:31:20.533612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.533647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.533906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.533941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.534144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.534177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.534310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.534344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.534556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.534590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.534982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.535064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.535295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.535335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.535481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.535517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.535707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.535742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.535874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.535910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 
00:29:19.922 [2024-12-10 14:31:20.536135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.536170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.536423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.536459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.536719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.536753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.536940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.536974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.537096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.537130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.537410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.537447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.537654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.537688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.537980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.538016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.538202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.538260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-12-10 14:31:20.538451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.538488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 
00:29:19.922 [2024-12-10 14:31:20.538693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.922 [2024-12-10 14:31:20.538728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.539000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.539033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.539291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.539330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.539467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.539501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.539698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.539733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.539949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.539984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.540107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.540140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.540395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.540431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.540565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.540599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.540854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.540888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 
00:29:19.923 [2024-12-10 14:31:20.541145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.541180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.541392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.541429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.541630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.541665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.541847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.541881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.542156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.542191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.542343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.542379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.542568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.542602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.542918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.542952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.543245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.543282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.543552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.543587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 
00:29:19.923 [2024-12-10 14:31:20.543732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.543766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.543952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.543988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.544249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.544286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.544570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.544605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.544809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.544844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.545110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.545190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.545507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.545585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.545838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.545880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.546072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.546108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.546291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.546329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 
00:29:19.923 [2024-12-10 14:31:20.546610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.546644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.546828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.546862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.547064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.547097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.547235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.547271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.547481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.547516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.547792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.547826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.548109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.548144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.548418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.548455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.548657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.548692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-12-10 14:31:20.548929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.923 [2024-12-10 14:31:20.548964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.923 qpair failed and we were unable to recover it. 
00:29:19.923 [2024-12-10 14:31:20.549227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.549264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.549471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.549506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.549763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.549798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.550060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.550095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.550295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.550330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.550597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.550632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.550841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.550876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.551174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.551209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.551431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.551467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.551604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.551638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 
00:29:19.924 [2024-12-10 14:31:20.551900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.551934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.552208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.552270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.552563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.552599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.552901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.552935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.553193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.553240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.553439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.553474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.553683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.553717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.553932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.553968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.554245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.554281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.554534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.554567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 
00:29:19.924 [2024-12-10 14:31:20.554787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.554823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.555044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.555077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.555209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.555257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.555509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.555540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.555796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.555830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.556051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.556092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.556280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.556315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.556574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.556609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.556809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.556843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-12-10 14:31:20.557031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.924 [2024-12-10 14:31:20.557065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.924 qpair failed and we were unable to recover it. 
00:29:19.924 [2024-12-10 14:31:20.557318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.924 [2024-12-10 14:31:20.557353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:19.924 qpair failed and we were unable to recover it.
00:29:19.924 [... the same three-line failure record (posix.c:1054:posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats for roughly 200 further connection attempts to 10.0.0.2, port 4420 between 14:31:20.557 and 14:31:20.613; from 14:31:20.592535 onward the failing tqpair is 0x7f6364000b90 instead of 0x7f6370000b90 ...]
00:29:19.930 [2024-12-10 14:31:20.613498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.930 [2024-12-10 14:31:20.613533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:19.930 qpair failed and we were unable to recover it.
00:29:19.930 [2024-12-10 14:31:20.613822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.613856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.614056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.614090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.614404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.614441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.614575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.614609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.614798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.614833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.615027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.615062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.615361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.615397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.615677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.615711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.615928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.615963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.616151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.616186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 
00:29:19.930 [2024-12-10 14:31:20.616425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.616459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.616648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.616683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.616989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.617071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.617383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.617426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.617632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.617667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.617968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.618003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.618194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.618245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.618531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.618565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.618792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.618827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.619110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.619145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 
00:29:19.930 [2024-12-10 14:31:20.619402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.619439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.619738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.619772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.620035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.620070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.620375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.620410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.620661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.620696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.930 qpair failed and we were unable to recover it. 00:29:19.930 [2024-12-10 14:31:20.620905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.930 [2024-12-10 14:31:20.620949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.931 qpair failed and we were unable to recover it. 00:29:19.931 [2024-12-10 14:31:20.621136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.931 [2024-12-10 14:31:20.621170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.931 qpair failed and we were unable to recover it. 00:29:19.931 [2024-12-10 14:31:20.621442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.931 [2024-12-10 14:31:20.621478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.931 qpair failed and we were unable to recover it. 00:29:19.931 [2024-12-10 14:31:20.621751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.931 [2024-12-10 14:31:20.621785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.931 qpair failed and we were unable to recover it. 00:29:19.931 [2024-12-10 14:31:20.622063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.931 [2024-12-10 14:31:20.622097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.931 qpair failed and we were unable to recover it. 
00:29:19.931 [2024-12-10 14:31:20.622345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.931 [2024-12-10 14:31:20.622380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.931 qpair failed and we were unable to recover it. 00:29:19.931 [2024-12-10 14:31:20.622637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.931 [2024-12-10 14:31:20.622670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.931 qpair failed and we were unable to recover it. 00:29:19.931 [2024-12-10 14:31:20.622785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.931 [2024-12-10 14:31:20.622820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.931 qpair failed and we were unable to recover it. 00:29:19.931 [2024-12-10 14:31:20.623042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.931 [2024-12-10 14:31:20.623076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.931 qpair failed and we were unable to recover it. 00:29:19.931 [2024-12-10 14:31:20.623185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.931 [2024-12-10 14:31:20.623230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.931 qpair failed and we were unable to recover it. 00:29:19.931 [2024-12-10 14:31:20.623456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.931 [2024-12-10 14:31:20.623491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.931 qpair failed and we were unable to recover it. 00:29:19.931 [2024-12-10 14:31:20.623788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.931 [2024-12-10 14:31:20.623823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.931 qpair failed and we were unable to recover it. 00:29:19.931 [2024-12-10 14:31:20.624045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.931 [2024-12-10 14:31:20.624080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:19.931 qpair failed and we were unable to recover it. 00:29:20.207 [2024-12-10 14:31:20.624263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.207 [2024-12-10 14:31:20.624300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.207 qpair failed and we were unable to recover it. 00:29:20.207 [2024-12-10 14:31:20.624504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.207 [2024-12-10 14:31:20.624539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.207 qpair failed and we were unable to recover it. 
00:29:20.207 [2024-12-10 14:31:20.624791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.207 [2024-12-10 14:31:20.624826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.207 qpair failed and we were unable to recover it. 00:29:20.207 [2024-12-10 14:31:20.624982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.207 [2024-12-10 14:31:20.625017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.207 qpair failed and we were unable to recover it. 00:29:20.207 [2024-12-10 14:31:20.625276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.207 [2024-12-10 14:31:20.625312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.207 qpair failed and we were unable to recover it. 00:29:20.207 [2024-12-10 14:31:20.625624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.207 [2024-12-10 14:31:20.625658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.207 qpair failed and we were unable to recover it. 00:29:20.207 [2024-12-10 14:31:20.625844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.207 [2024-12-10 14:31:20.625877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.207 qpair failed and we were unable to recover it. 00:29:20.207 [2024-12-10 14:31:20.626100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.207 [2024-12-10 14:31:20.626135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.207 qpair failed and we were unable to recover it. 00:29:20.207 [2024-12-10 14:31:20.626368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.207 [2024-12-10 14:31:20.626403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.207 qpair failed and we were unable to recover it. 00:29:20.207 [2024-12-10 14:31:20.626677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.207 [2024-12-10 14:31:20.626712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.207 qpair failed and we were unable to recover it. 00:29:20.207 [2024-12-10 14:31:20.626865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.207 [2024-12-10 14:31:20.626899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.207 qpair failed and we were unable to recover it. 00:29:20.207 [2024-12-10 14:31:20.627116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.207 [2024-12-10 14:31:20.627150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.207 qpair failed and we were unable to recover it. 
00:29:20.207 [2024-12-10 14:31:20.627430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.207 [2024-12-10 14:31:20.627466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.207 qpair failed and we were unable to recover it. 00:29:20.207 [2024-12-10 14:31:20.627730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.207 [2024-12-10 14:31:20.627764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.207 qpair failed and we were unable to recover it. 00:29:20.207 [2024-12-10 14:31:20.627998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.207 [2024-12-10 14:31:20.628034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.207 qpair failed and we were unable to recover it. 00:29:20.207 [2024-12-10 14:31:20.628155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.207 [2024-12-10 14:31:20.628189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.207 qpair failed and we were unable to recover it. 00:29:20.207 [2024-12-10 14:31:20.628398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.207 [2024-12-10 14:31:20.628432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.207 qpair failed and we were unable to recover it. 00:29:20.207 [2024-12-10 14:31:20.628628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.207 [2024-12-10 14:31:20.628662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.207 qpair failed and we were unable to recover it. 00:29:20.207 [2024-12-10 14:31:20.628940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.628974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.629255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.629293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.629553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.629587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.629728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.629762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 
00:29:20.208 [2024-12-10 14:31:20.630014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.630048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.630271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.630308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.630426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.630461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.630575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.630609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.630806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.630841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.631024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.631065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.631249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.631286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.631566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.631600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.631881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.631916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.632138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.632172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 
00:29:20.208 [2024-12-10 14:31:20.632384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.632418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.632625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.632659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.632853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.632888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.633073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.633108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.633250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.633286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.633491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.633524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.633736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.633769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.634077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.634111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.634298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.634332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.634574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.634609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 
00:29:20.208 [2024-12-10 14:31:20.634790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.634825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.635106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.635140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.635401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.635437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.635733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.635768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.635963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.635997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.636185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.636227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.636414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.636448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.636705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.636739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.636951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.636986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.637183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.637224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 
00:29:20.208 [2024-12-10 14:31:20.637528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.637561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.637838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.637872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.638081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.638117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.638311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.638347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.638604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.638637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.208 qpair failed and we were unable to recover it. 00:29:20.208 [2024-12-10 14:31:20.638823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.208 [2024-12-10 14:31:20.638856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.639138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.639172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.639473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.639508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.639786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.639820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.640046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.640079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 
00:29:20.209 [2024-12-10 14:31:20.640211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.640258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.640442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.640476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.640682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.640716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.640897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.640931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.641134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.641168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.641369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.641409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.641618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.641653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.641854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.641888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.642101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.642135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.642278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.642315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 
00:29:20.209 [2024-12-10 14:31:20.642590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.642625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.642811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.642845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.643050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.643084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.643296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.643332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.643590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.643624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.643812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.643847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.644031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.644065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.644340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.644375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.644601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.644637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.644781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.644815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 
00:29:20.209 [2024-12-10 14:31:20.645026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.645059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.645258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.645295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.645558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.645593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.645862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.645896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.646183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.646233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.646419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.646455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.646687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.646721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.646919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.646952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.647252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.647288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 00:29:20.209 [2024-12-10 14:31:20.647584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.209 [2024-12-10 14:31:20.647617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.209 qpair failed and we were unable to recover it. 
00:29:20.209 [2024-12-10 14:31:20.647836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.209 [2024-12-10 14:31:20.647870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.209 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed with errno = 111; sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats with advancing timestamps roughly 200 more times, from 14:31:20.648088 through 14:31:20.703271 ...]
00:29:20.215 [2024-12-10 14:31:20.703514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.215 [2024-12-10 14:31:20.703549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.215 qpair failed and we were unable to recover it.
00:29:20.215 [2024-12-10 14:31:20.703833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.703867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 00:29:20.215 [2024-12-10 14:31:20.704079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.704114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 00:29:20.215 [2024-12-10 14:31:20.704239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.704275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 00:29:20.215 [2024-12-10 14:31:20.704485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.704518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 00:29:20.215 [2024-12-10 14:31:20.704755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.704788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 00:29:20.215 [2024-12-10 14:31:20.705094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.705129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 00:29:20.215 [2024-12-10 14:31:20.705343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.705378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 00:29:20.215 [2024-12-10 14:31:20.705566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.705599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 00:29:20.215 [2024-12-10 14:31:20.705755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.705803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 00:29:20.215 [2024-12-10 14:31:20.706088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.706123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 
00:29:20.215 [2024-12-10 14:31:20.706364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.706400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 00:29:20.215 [2024-12-10 14:31:20.706688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.706723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 00:29:20.215 [2024-12-10 14:31:20.706914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.706948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 00:29:20.215 [2024-12-10 14:31:20.707154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.707188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 00:29:20.215 [2024-12-10 14:31:20.707399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.707435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 00:29:20.215 [2024-12-10 14:31:20.707580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.707613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 00:29:20.215 [2024-12-10 14:31:20.707809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.707844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 00:29:20.215 [2024-12-10 14:31:20.708039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.708073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 00:29:20.215 [2024-12-10 14:31:20.708272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.708309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 00:29:20.215 [2024-12-10 14:31:20.708512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.708547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 
00:29:20.215 [2024-12-10 14:31:20.708804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.708836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 00:29:20.215 [2024-12-10 14:31:20.709140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.709173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 00:29:20.215 [2024-12-10 14:31:20.709366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.709402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 00:29:20.215 [2024-12-10 14:31:20.709618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.215 [2024-12-10 14:31:20.709652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.215 qpair failed and we were unable to recover it. 00:29:20.215 [2024-12-10 14:31:20.709858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.709894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.710078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.710112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.710369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.710405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.710623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.710658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.710854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.710887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.711072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.711107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 
00:29:20.216 [2024-12-10 14:31:20.711318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.711353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.711636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.711669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.711924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.711965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.712096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.712129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.712338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.712374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.712562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.712596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.712895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.712930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.713234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.713270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.713478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.713512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.713721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.713756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 
00:29:20.216 [2024-12-10 14:31:20.714062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.714095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.714376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.714413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.714597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.714631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.714775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.714808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.714940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.714975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.715177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.715211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.715433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.715467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.715675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.715708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.715915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.715950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.716168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.716201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 
00:29:20.216 [2024-12-10 14:31:20.716490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.716526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.716777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.716811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.716995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.717030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.717330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.717366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.717637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.717672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.717964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.717998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.718266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.718301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.718494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.718527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.718731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.216 [2024-12-10 14:31:20.718766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.216 qpair failed and we were unable to recover it. 00:29:20.216 [2024-12-10 14:31:20.718956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.718990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 
00:29:20.217 [2024-12-10 14:31:20.719313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.719349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.719629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.719663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.719940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.719974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.720284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.720320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.720501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.720535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.720841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.720875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.721057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.721092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.721323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.721358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.721643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.721678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.721923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.721957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 
00:29:20.217 [2024-12-10 14:31:20.722237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.722272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.722529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.722563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.722780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.722820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.723078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.723113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.723388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.723424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.723610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.723643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.723857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.723891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.724030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.724063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.724359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.724394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.724586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.724620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 
00:29:20.217 [2024-12-10 14:31:20.724807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.724840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.725096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.725131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.725315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.725349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.725468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.725501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.725758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.725792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.726006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.726042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.726357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.726392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.726581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.726615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.726821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.726856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.727062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.727096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 
00:29:20.217 [2024-12-10 14:31:20.727370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.727405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.727604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.727639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.727779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.727814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.728068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.728102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.728330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.728366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.728625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.728659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.728885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.728919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.729199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.217 [2024-12-10 14:31:20.729246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.217 qpair failed and we were unable to recover it. 00:29:20.217 [2024-12-10 14:31:20.729518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.729552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.729803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.729837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 
00:29:20.218 [2024-12-10 14:31:20.729978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.730013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.730200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.730247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.730461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.730495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.730683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.730717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.730941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.730976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.731160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.731195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.731492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.731528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.731811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.731845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.732051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.732085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.732293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.732330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 
00:29:20.218 [2024-12-10 14:31:20.732585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.732619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.732876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.732910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.733215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.733268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.733526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.733560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.733770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.733804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.734078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.734113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.734252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.734288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.734501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.734536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.734749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.734784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.735001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.735034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 
00:29:20.218 [2024-12-10 14:31:20.735243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.735277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.735461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.735496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.735751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.735785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.735982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.736016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.736292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.736327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.736621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.736656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.736938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.736973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.737250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.737285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.737588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.737623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 00:29:20.218 [2024-12-10 14:31:20.737834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.218 [2024-12-10 14:31:20.737868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.218 qpair failed and we were unable to recover it. 
00:29:20.218 [2024-12-10 14:31:20.738142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:20.218 [2024-12-10 14:31:20.738176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 
00:29:20.218 qpair failed and we were unable to recover it. 
00:29:20.218 [... the same three-line error repeats verbatim, differing only in microsecond timestamps, from 14:31:20.738142 through 14:31:20.795411 (over 200 entries, Jenkins timestamps 00:29:20.218-00:29:20.224, all for tqpair=0x7f6370000b90 against addr=10.0.0.2, port=4420) ...]
00:29:20.224 [2024-12-10 14:31:20.795600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.795635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.795893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.795926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.796181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.796215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.796484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.796519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.796804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.796837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.797117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.797151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.797436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.797472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.797752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.797786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.798017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.798051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.798239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.798275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 
00:29:20.224 [2024-12-10 14:31:20.798459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.798493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.798650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.798684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.798907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.798948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.799134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.799168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.799473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.799510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.799700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.799733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.799990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.800024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.800210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.800259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.800531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.800565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.800844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.800877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 
00:29:20.224 [2024-12-10 14:31:20.801164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.801198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.801409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.801443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.801715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.801749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.801876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.801910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.802040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.802075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.802370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.802405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.802600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.802635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.802838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.224 [2024-12-10 14:31:20.802872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.224 qpair failed and we were unable to recover it. 00:29:20.224 [2024-12-10 14:31:20.803126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.803160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.803467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.803503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 
00:29:20.225 [2024-12-10 14:31:20.803800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.803834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.804053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.804087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.804201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.804248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.804452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.804487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.804767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.804801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.805101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.805135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.805380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.805416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.805695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.805729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.806036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.806072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.806347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.806385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 
00:29:20.225 [2024-12-10 14:31:20.806668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.806702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.806915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.806948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.807127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.807161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.807445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.807482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.807749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.807784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.807976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.808011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.808196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.808241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.808497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.808531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.808808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.808842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.809124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.809159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 
00:29:20.225 [2024-12-10 14:31:20.809440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.809475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.809733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.809768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.810025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.810066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.810351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.810386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.810587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.810621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.810877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.810912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.811041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.811075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.811331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.811367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.811507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.811541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.811798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.811831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 
00:29:20.225 [2024-12-10 14:31:20.812109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.812144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.812431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.812467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.225 [2024-12-10 14:31:20.812650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.225 [2024-12-10 14:31:20.812684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.225 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.812814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.812849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.813035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.813071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.813346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.813381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.813600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.813635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.813781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.813816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.814031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.814065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.814262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.814297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 
00:29:20.226 [2024-12-10 14:31:20.814598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.814632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.814852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.814886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.815071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.815105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.815322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.815358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.815614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.815649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.815953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.815987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.816184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.816230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.816509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.816544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.816816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.816851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.817145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.817185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 
00:29:20.226 [2024-12-10 14:31:20.817439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.817476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.817659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.817694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.817902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.817937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.818241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.818277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.818425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.818460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.818736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.818771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.819055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.819090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.819370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.819407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.819688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.819723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.819927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.819962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 
00:29:20.226 [2024-12-10 14:31:20.820239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.820276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.820562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.820596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.820778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.820813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.821043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.821079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.821198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.821243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.821548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.821582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.821843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.821878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.822181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.822216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.822478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.822513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.822794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.822829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 
00:29:20.226 [2024-12-10 14:31:20.823110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.823145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.226 [2024-12-10 14:31:20.823334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.226 [2024-12-10 14:31:20.823370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.226 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.823649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.823683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.823968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.824002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.824280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.824316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.824598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.824631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.824774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.824810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.825007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.825042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.825248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.825283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.825544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.825578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 
00:29:20.227 [2024-12-10 14:31:20.825840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.825874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.825989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.826023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.826208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.826252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.826531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.826565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.826846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.826881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.827149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.827184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.827503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.827539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.827818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.827853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.828108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.828142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.828416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.828458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 
00:29:20.227 [2024-12-10 14:31:20.828762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.828797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.828997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.829032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.829239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.829275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.829498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.829534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.829725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.829760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.829874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.829909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.830102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.830137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.830323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.830359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.830584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.830618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 00:29:20.227 [2024-12-10 14:31:20.830918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.227 [2024-12-10 14:31:20.830953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.227 qpair failed and we were unable to recover it. 
00:29:20.227 [2024-12-10 14:31:20.831141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.227 [2024-12-10 14:31:20.831176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.227 qpair failed and we were unable to recover it.
[... the same connect()-failed / qpair-failed triplet repeats with only the timestamps changing, from 14:31:20.831 through 14:31:20.886; every attempt targets tqpair=0x7f6370000b90 at addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:29:20.233 [2024-12-10 14:31:20.886752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.233 [2024-12-10 14:31:20.886786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.233 qpair failed and we were unable to recover it.
00:29:20.233 [2024-12-10 14:31:20.887084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.887119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.887384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.887420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.887691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.887725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.887873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.887908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.888090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.888125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.888351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.888387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.888529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.888564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.888817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.888851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.889067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.889102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.889324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.889362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 
00:29:20.233 [2024-12-10 14:31:20.889640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.889675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.889872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.889906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.890160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.890195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.890340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.890377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.890657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.890691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.890815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.890849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.890980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.891015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.891195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.891256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.891463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.891498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.891686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.891727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 
00:29:20.233 [2024-12-10 14:31:20.892003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.892038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.892266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.892302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.892582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.892616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.892811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.892845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.893122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.893157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.893325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.893363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.893551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.893585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.893862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.893897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.894183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.894228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 00:29:20.233 [2024-12-10 14:31:20.894507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.233 [2024-12-10 14:31:20.894542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.233 qpair failed and we were unable to recover it. 
00:29:20.233 [2024-12-10 14:31:20.894748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.894784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.895087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.895121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.895341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.895377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.895646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.895681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.895911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.895946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.896208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.896257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.896511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.896546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.896678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.896713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.896921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.896956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.897238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.897275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 
00:29:20.234 [2024-12-10 14:31:20.897461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.897496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.897774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.897808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.898074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.898110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.898386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.898422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.898703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.898738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.898920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.898954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.899153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.899187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.899455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.899491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.899775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.899809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.900120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.900154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 
00:29:20.234 [2024-12-10 14:31:20.900411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.900446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.900648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.900682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.900960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.900994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.901248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.901285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.901470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.901504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.901700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.901734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.901918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.901953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.902135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.902169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.902373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.902409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.902632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.902673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 
00:29:20.234 [2024-12-10 14:31:20.902873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.902906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.903129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.903164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.903429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.903465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.903779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.903813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.904008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.904043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.904328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.904364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.904636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.234 [2024-12-10 14:31:20.904671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.234 qpair failed and we were unable to recover it. 00:29:20.234 [2024-12-10 14:31:20.904956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.904991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.905272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.905308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.905592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.905627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 
00:29:20.235 [2024-12-10 14:31:20.905901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.905949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.906233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.906269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.906456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.906491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.906684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.906718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.906901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.906935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.907123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.907158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.907314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.907349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.907628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.907663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.907875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.907909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.908137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.908171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 
00:29:20.235 [2024-12-10 14:31:20.908387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.908424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.908629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.908663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.908919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.908952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.909255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.909290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.909554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.909589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.909843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.909876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.910181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.910216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.910373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.910407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.910682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.910716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.911021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.911055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 
00:29:20.235 [2024-12-10 14:31:20.911320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.911356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.911573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.911608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.911792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.911826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.912112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.912147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.912346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.912382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.912676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.912711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.912995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.913030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.913307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.913343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.913531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.913565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.913827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.913867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 
00:29:20.235 [2024-12-10 14:31:20.914148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.914183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.914381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.914416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.914673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.914706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.914964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.914998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.915302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.915338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.915538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.235 [2024-12-10 14:31:20.915572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.235 qpair failed and we were unable to recover it. 00:29:20.235 [2024-12-10 14:31:20.915853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.915888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.916188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.916232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.916437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.916471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.916663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.916697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 
00:29:20.236 [2024-12-10 14:31:20.916894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.916928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.917131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.917165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.917363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.917397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.917630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.917665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.917930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.917964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.918189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.918237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.918542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.918577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.918859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.918892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.919097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.919132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.919265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.919301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 
00:29:20.236 [2024-12-10 14:31:20.919599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.919635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.919911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.919946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.920206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.920251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.920472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.920507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.920728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.920762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.921043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.921076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.921387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.921422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.921619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.921654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.921842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.921876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.922136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.922169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 
00:29:20.236 [2024-12-10 14:31:20.922404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.922439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.922707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.922741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.923021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.923054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.923360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.923396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.923597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.923632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.923867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.923902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.924163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.924197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.924415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.924450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.924637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.924671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.924858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.924898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 
00:29:20.236 [2024-12-10 14:31:20.925096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.925130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.925389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.925424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.925700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.925734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.926037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.926072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.926256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.926291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.236 [2024-12-10 14:31:20.926443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.236 [2024-12-10 14:31:20.926478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.236 qpair failed and we were unable to recover it. 00:29:20.237 [2024-12-10 14:31:20.926780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.237 [2024-12-10 14:31:20.926814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.237 qpair failed and we were unable to recover it. 00:29:20.237 [2024-12-10 14:31:20.926950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.237 [2024-12-10 14:31:20.926985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.237 qpair failed and we were unable to recover it. 00:29:20.237 [2024-12-10 14:31:20.927185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.237 [2024-12-10 14:31:20.927247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.237 qpair failed and we were unable to recover it. 00:29:20.237 [2024-12-10 14:31:20.927472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.237 [2024-12-10 14:31:20.927507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.237 qpair failed and we were unable to recover it. 
00:29:20.237 [2024-12-10 14:31:20.927810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.237 [2024-12-10 14:31:20.927843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.237 qpair failed and we were unable to recover it.
00:29:20.237 [2024-12-10 14:31:20.928100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.237 [2024-12-10 14:31:20.928134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.237 qpair failed and we were unable to recover it.
00:29:20.237 [2024-12-10 14:31:20.928394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.237 [2024-12-10 14:31:20.928431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.237 qpair failed and we were unable to recover it.
00:29:20.237 [2024-12-10 14:31:20.928658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.237 [2024-12-10 14:31:20.928694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.237 qpair failed and we were unable to recover it.
00:29:20.237 [2024-12-10 14:31:20.928924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.237 [2024-12-10 14:31:20.928958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.237 qpair failed and we were unable to recover it.
00:29:20.237 [2024-12-10 14:31:20.929142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.237 [2024-12-10 14:31:20.929177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.237 qpair failed and we were unable to recover it.
00:29:20.237 [2024-12-10 14:31:20.929442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.237 [2024-12-10 14:31:20.929477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.237 qpair failed and we were unable to recover it.
00:29:20.237 [2024-12-10 14:31:20.929666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.237 [2024-12-10 14:31:20.929700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.237 qpair failed and we were unable to recover it.
00:29:20.237 [2024-12-10 14:31:20.929826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.237 [2024-12-10 14:31:20.929860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.237 qpair failed and we were unable to recover it.
00:29:20.513 [2024-12-10 14:31:20.930079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.513 [2024-12-10 14:31:20.930113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.513 qpair failed and we were unable to recover it.
00:29:20.513 [2024-12-10 14:31:20.930390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.513 [2024-12-10 14:31:20.930425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.513 qpair failed and we were unable to recover it.
00:29:20.513 [2024-12-10 14:31:20.930649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.513 [2024-12-10 14:31:20.930683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.513 qpair failed and we were unable to recover it.
00:29:20.513 [2024-12-10 14:31:20.930936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.513 [2024-12-10 14:31:20.930970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.513 qpair failed and we were unable to recover it.
00:29:20.513 [2024-12-10 14:31:20.931276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.513 [2024-12-10 14:31:20.931310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.513 qpair failed and we were unable to recover it.
00:29:20.513 [2024-12-10 14:31:20.931586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.513 [2024-12-10 14:31:20.931621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.513 qpair failed and we were unable to recover it.
00:29:20.513 [2024-12-10 14:31:20.931903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.513 [2024-12-10 14:31:20.931939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.513 qpair failed and we were unable to recover it.
00:29:20.513 [2024-12-10 14:31:20.932245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.513 [2024-12-10 14:31:20.932282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.513 qpair failed and we were unable to recover it.
00:29:20.513 [2024-12-10 14:31:20.932468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.513 [2024-12-10 14:31:20.932502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.513 qpair failed and we were unable to recover it.
00:29:20.513 [2024-12-10 14:31:20.932761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.513 [2024-12-10 14:31:20.932795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.513 qpair failed and we were unable to recover it.
00:29:20.513 [2024-12-10 14:31:20.932981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.513 [2024-12-10 14:31:20.933015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.513 qpair failed and we were unable to recover it.
00:29:20.513 [2024-12-10 14:31:20.933232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.513 [2024-12-10 14:31:20.933267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.513 qpair failed and we were unable to recover it.
00:29:20.513 [2024-12-10 14:31:20.933543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.513 [2024-12-10 14:31:20.933578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.513 qpair failed and we were unable to recover it.
00:29:20.513 [2024-12-10 14:31:20.933901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.513 [2024-12-10 14:31:20.933934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.934141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.934174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.934471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.934507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.934775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.934808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.935100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.935134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.935322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.935357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.935621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.935655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.935783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.935822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.936105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.936139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.936435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.936470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.936780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.936814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.936999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.937033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.937250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.937285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.937540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.937573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.937834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.937868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.938052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.938086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.938275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.938310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.938498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.938532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.938711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.938745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.938955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.938989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.939122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.939156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.939371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.939407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.939623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.939657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.939944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.939978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.940177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.940212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.940506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.940540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.940814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.940849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.941037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.941073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.941254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.941291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.941489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.941523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.941658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.941693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.941911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.941946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.942131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.942164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.942433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.942468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.942747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.942782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.943105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.943139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.943356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.943391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.943668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.943703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.943815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.943848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.514 qpair failed and we were unable to recover it.
00:29:20.514 [2024-12-10 14:31:20.944030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.514 [2024-12-10 14:31:20.944064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.944361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.944396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.944700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.944734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.944994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.945028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.945233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.945269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.945537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.945571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.945771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.945806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.946010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.946043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.946328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.946369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.946595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.946629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.946818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.946852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.947132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.947165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.947393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.947427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.947577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.947612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.947747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.947780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.947978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.948012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.948202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.948244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.948442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.948477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.948688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.948723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.948924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.948958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.949239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.949274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.949530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.949564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.949775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.949811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.950074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.950108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.950383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.950419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.950706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.950742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.951040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.951073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.951266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.951301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.951444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.951478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.951689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.951723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.951907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.951940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.952121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.952156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.952426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.952462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.952746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.952780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.952910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.952944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.953133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.953173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.953374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.953410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.953624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.953659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.953940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.953974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.515 qpair failed and we were unable to recover it.
00:29:20.515 [2024-12-10 14:31:20.954254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.515 [2024-12-10 14:31:20.954291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.954515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.954549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.954767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.954801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.955002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.955035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.955238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.955273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.955459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.955494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.955766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.955800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.956055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.956090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.956296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.956332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.956471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.956505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.956721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.956755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.957035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.957070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.957347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.957384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.957666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.957700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.957920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.957955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.958159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.958194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.958432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.958467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.958744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.958779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.959063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.959097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.959379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.959415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.959643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.959678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.959924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.959959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.960232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.960268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.960498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.960532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.960811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.960846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.961041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.961075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.961263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.961298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.961553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.961588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.961800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.961833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.962019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.962052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.962247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.962284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.962566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.962601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.962902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.962936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.963140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.963175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.963470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.963506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.963646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.963679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.963929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.963969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.964275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.964310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.964518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.964553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.516 [2024-12-10 14:31:20.964734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.516 [2024-12-10 14:31:20.964768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.516 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.964971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.965006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.965240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.965276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.965555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.965588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.965866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.965899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.966032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.966067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.966375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.966411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.966597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.966632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.966921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.966955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.967211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.967256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.967495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.967529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.967840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.967874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.968090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.968124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.968330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.968364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.968607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.968641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.968845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.968878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.969158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.969193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.969497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.969531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.969790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.969825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.970124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.970159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.970384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.970418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.970667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.970701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.971003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.971037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.971238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.971274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.971484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.971518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.971795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.971830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.972014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.972049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.972240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.972276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.972557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.972592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.972869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.972904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.973099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.517 [2024-12-10 14:31:20.973133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.517 qpair failed and we were unable to recover it.
00:29:20.517 [2024-12-10 14:31:20.973440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.517 [2024-12-10 14:31:20.973477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.517 qpair failed and we were unable to recover it. 00:29:20.517 [2024-12-10 14:31:20.973684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.517 [2024-12-10 14:31:20.973717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.517 qpair failed and we were unable to recover it. 00:29:20.517 [2024-12-10 14:31:20.973991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.517 [2024-12-10 14:31:20.974026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.517 qpair failed and we were unable to recover it. 00:29:20.517 [2024-12-10 14:31:20.974286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.517 [2024-12-10 14:31:20.974321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.517 qpair failed and we were unable to recover it. 00:29:20.517 [2024-12-10 14:31:20.974605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.517 [2024-12-10 14:31:20.974638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.517 qpair failed and we were unable to recover it. 00:29:20.517 [2024-12-10 14:31:20.974868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.517 [2024-12-10 14:31:20.974902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.517 qpair failed and we were unable to recover it. 00:29:20.517 [2024-12-10 14:31:20.975128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.517 [2024-12-10 14:31:20.975169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.517 qpair failed and we were unable to recover it. 00:29:20.517 [2024-12-10 14:31:20.975384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.517 [2024-12-10 14:31:20.975418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.517 qpair failed and we were unable to recover it. 00:29:20.518 [2024-12-10 14:31:20.975642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.518 [2024-12-10 14:31:20.975677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.518 qpair failed and we were unable to recover it. 00:29:20.518 [2024-12-10 14:31:20.975902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.518 [2024-12-10 14:31:20.975937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.518 qpair failed and we were unable to recover it. 
00:29:20.518 [2024-12-10 14:31:20.976231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.518 [2024-12-10 14:31:20.976267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.518 qpair failed and we were unable to recover it. 00:29:20.518 [2024-12-10 14:31:20.976537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.518 [2024-12-10 14:31:20.976571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.518 qpair failed and we were unable to recover it. 00:29:20.518 [2024-12-10 14:31:20.976828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.518 [2024-12-10 14:31:20.976862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.518 qpair failed and we were unable to recover it. 00:29:20.518 [2024-12-10 14:31:20.976993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.518 [2024-12-10 14:31:20.977028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.518 qpair failed and we were unable to recover it. 00:29:20.518 [2024-12-10 14:31:20.977256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.518 [2024-12-10 14:31:20.977293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.518 qpair failed and we were unable to recover it. 00:29:20.518 [2024-12-10 14:31:20.977574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.518 [2024-12-10 14:31:20.977610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.518 qpair failed and we were unable to recover it. 00:29:20.518 [2024-12-10 14:31:20.977750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.518 [2024-12-10 14:31:20.977784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.518 qpair failed and we were unable to recover it. 00:29:20.518 [2024-12-10 14:31:20.978086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.518 [2024-12-10 14:31:20.978119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.518 qpair failed and we were unable to recover it. 00:29:20.518 [2024-12-10 14:31:20.978338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.518 [2024-12-10 14:31:20.978372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.518 qpair failed and we were unable to recover it. 00:29:20.518 [2024-12-10 14:31:20.978578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.518 [2024-12-10 14:31:20.978611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.518 qpair failed and we were unable to recover it. 
00:29:20.518 [2024-12-10 14:31:20.978867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.518 [2024-12-10 14:31:20.978902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.518 qpair failed and we were unable to recover it. 00:29:20.518 [2024-12-10 14:31:20.979195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.518 [2024-12-10 14:31:20.979238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.518 qpair failed and we were unable to recover it. 00:29:20.518 [2024-12-10 14:31:20.979506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.518 [2024-12-10 14:31:20.979540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.518 qpair failed and we were unable to recover it. 00:29:20.518 [2024-12-10 14:31:20.979822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.518 [2024-12-10 14:31:20.979857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.518 qpair failed and we were unable to recover it. 00:29:20.518 [2024-12-10 14:31:20.980135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.518 [2024-12-10 14:31:20.980170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.518 qpair failed and we were unable to recover it. 00:29:20.518 [2024-12-10 14:31:20.980455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.518 [2024-12-10 14:31:20.980490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.518 qpair failed and we were unable to recover it. 00:29:20.518 [2024-12-10 14:31:20.980710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.518 [2024-12-10 14:31:20.980744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.518 qpair failed and we were unable to recover it. 00:29:20.518 [2024-12-10 14:31:20.981024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.518 [2024-12-10 14:31:20.981059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.518 qpair failed and we were unable to recover it. 00:29:20.518 [2024-12-10 14:31:20.981313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.518 [2024-12-10 14:31:20.981347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.518 qpair failed and we were unable to recover it. 00:29:20.518 [2024-12-10 14:31:20.981548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.981582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 
00:29:20.519 [2024-12-10 14:31:20.981859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.981893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.519 [2024-12-10 14:31:20.982096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.982130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.519 [2024-12-10 14:31:20.982339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.982374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.519 [2024-12-10 14:31:20.982662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.982697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.519 [2024-12-10 14:31:20.982975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.983010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.519 [2024-12-10 14:31:20.983298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.983333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.519 [2024-12-10 14:31:20.983610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.983644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.519 [2024-12-10 14:31:20.983871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.983905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.519 [2024-12-10 14:31:20.984163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.984197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.519 [2024-12-10 14:31:20.984507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.984541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 
00:29:20.519 [2024-12-10 14:31:20.984749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.984784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.519 [2024-12-10 14:31:20.984993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.985027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.519 [2024-12-10 14:31:20.985304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.985339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.519 [2024-12-10 14:31:20.985597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.985631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.519 [2024-12-10 14:31:20.985758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.985792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.519 [2024-12-10 14:31:20.985975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.986008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.519 [2024-12-10 14:31:20.986270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.986315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.519 [2024-12-10 14:31:20.986502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.986536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.519 [2024-12-10 14:31:20.986735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.986769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.519 [2024-12-10 14:31:20.986987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.987022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 
00:29:20.519 [2024-12-10 14:31:20.987239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.987276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.519 [2024-12-10 14:31:20.987483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.987518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.519 [2024-12-10 14:31:20.987699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.987734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.519 [2024-12-10 14:31:20.987944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.987979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.519 [2024-12-10 14:31:20.988255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.519 [2024-12-10 14:31:20.988292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.519 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.988573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.988608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.988888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.988923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.989205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.989251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.989437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.989471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.989729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.989763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 
00:29:20.520 [2024-12-10 14:31:20.990052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.990086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.990207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.990254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.990439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.990474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.990598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.990632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.990819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.990852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.991130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.991165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.991434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.991468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.991691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.991725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.991993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.992026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.992169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.992204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 
00:29:20.520 [2024-12-10 14:31:20.992366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.992403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.992617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.992651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.992950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.992984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.993250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.993286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.993504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.993541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.993803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.993837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.994133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.994168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.994420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.994455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.994713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.994749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.994968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.995002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 
00:29:20.520 [2024-12-10 14:31:20.995258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.520 [2024-12-10 14:31:20.995293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.520 qpair failed and we were unable to recover it. 00:29:20.520 [2024-12-10 14:31:20.995491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:20.995525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:20.995781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:20.995815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:20.996019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:20.996054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:20.996239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:20.996274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:20.996551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:20.996586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:20.996790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:20.996831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:20.997112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:20.997146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:20.997361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:20.997397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:20.997584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:20.997617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 
00:29:20.521 [2024-12-10 14:31:20.997804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:20.997840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:20.998048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:20.998081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:20.998339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:20.998375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:20.998598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:20.998633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:20.998890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:20.998924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:20.999198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:20.999241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:20.999520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:20.999554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:20.999780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:20.999814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:20.999946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:20.999980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:21.000238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:21.000275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 
00:29:20.521 [2024-12-10 14:31:21.000565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:21.000600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:21.000804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:21.000838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:21.001019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:21.001053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:21.001262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:21.001298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:21.001497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:21.001532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:21.001720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:21.001754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:21.002006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:21.002041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:21.002314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:21.002351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.521 qpair failed and we were unable to recover it. 00:29:20.521 [2024-12-10 14:31:21.002635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.521 [2024-12-10 14:31:21.002669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.002945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.002980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 
00:29:20.522 [2024-12-10 14:31:21.003183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.003229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.003431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.003466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.003720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.003754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.003956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.003991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.004268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.004304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.004563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.004597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.004886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.004921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.005058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.005092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.005376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.005411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.005592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.005626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 
00:29:20.522 [2024-12-10 14:31:21.005897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.005932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.006208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.006256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.006533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.006568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.006841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.006875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.007079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.007113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.007301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.007338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.007535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.007576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.007823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.007858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.008142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.008178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.008466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.008503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 
00:29:20.522 [2024-12-10 14:31:21.008784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.008819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.008959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.008994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.009130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.009166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.009453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.009489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.009760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.009795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.522 qpair failed and we were unable to recover it. 00:29:20.522 [2024-12-10 14:31:21.010091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.522 [2024-12-10 14:31:21.010125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.523 qpair failed and we were unable to recover it. 00:29:20.523 [2024-12-10 14:31:21.010309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.523 [2024-12-10 14:31:21.010347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.523 qpair failed and we were unable to recover it. 00:29:20.523 [2024-12-10 14:31:21.010572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.523 [2024-12-10 14:31:21.010606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.523 qpair failed and we were unable to recover it. 00:29:20.523 [2024-12-10 14:31:21.010805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.523 [2024-12-10 14:31:21.010840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.523 qpair failed and we were unable to recover it. 00:29:20.523 [2024-12-10 14:31:21.011144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.523 [2024-12-10 14:31:21.011178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.523 qpair failed and we were unable to recover it. 
00:29:20.523 [2024-12-10 14:31:21.011383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.523 [2024-12-10 14:31:21.011419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.523 qpair failed and we were unable to recover it. 00:29:20.523 [2024-12-10 14:31:21.011696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.523 [2024-12-10 14:31:21.011730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.523 qpair failed and we were unable to recover it. 00:29:20.523 [2024-12-10 14:31:21.011982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.523 [2024-12-10 14:31:21.012016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.523 qpair failed and we were unable to recover it. 00:29:20.523 [2024-12-10 14:31:21.012152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.523 [2024-12-10 14:31:21.012186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.523 qpair failed and we were unable to recover it. 00:29:20.523 [2024-12-10 14:31:21.012477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.523 [2024-12-10 14:31:21.012512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.523 qpair failed and we were unable to recover it. 00:29:20.523 [2024-12-10 14:31:21.012705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.523 [2024-12-10 14:31:21.012740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.523 qpair failed and we were unable to recover it. 00:29:20.523 [2024-12-10 14:31:21.012992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.523 [2024-12-10 14:31:21.013026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.523 qpair failed and we were unable to recover it. 00:29:20.523 [2024-12-10 14:31:21.013305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.523 [2024-12-10 14:31:21.013340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.523 qpair failed and we were unable to recover it. 00:29:20.523 [2024-12-10 14:31:21.013547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.523 [2024-12-10 14:31:21.013581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.523 qpair failed and we were unable to recover it. 00:29:20.523 [2024-12-10 14:31:21.013778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.523 [2024-12-10 14:31:21.013812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.523 qpair failed and we were unable to recover it. 
00:29:20.523 [2024-12-10 14:31:21.014066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.523 [2024-12-10 14:31:21.014101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:20.523 qpair failed and we were unable to recover it.
[... this connect()/sock-connection-error/qpair-failed triplet repeats verbatim for tqpair=0x7f6370000b90, with only the microsecond timestamps advancing (14:31:21.014408 through 14:31:21.053153, roughly 147 further attempts) ...]
00:29:20.529 [2024-12-10 14:31:21.053560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.529 [2024-12-10 14:31:21.053643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:20.529 qpair failed and we were unable to recover it.
[... the same triplet then repeats for tqpair=0x240d500 (14:31:21.053871 through 14:31:21.070458, roughly 61 further attempts) ...]
00:29:20.531 [2024-12-10 14:31:21.070668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.531 [2024-12-10 14:31:21.070702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.531 qpair failed and we were unable to recover it. 00:29:20.531 [2024-12-10 14:31:21.070910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.531 [2024-12-10 14:31:21.070944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.531 qpair failed and we were unable to recover it. 00:29:20.531 [2024-12-10 14:31:21.071203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.531 [2024-12-10 14:31:21.071260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.531 qpair failed and we were unable to recover it. 00:29:20.531 [2024-12-10 14:31:21.071500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.531 [2024-12-10 14:31:21.071535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.531 qpair failed and we were unable to recover it. 00:29:20.531 [2024-12-10 14:31:21.071748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.531 [2024-12-10 14:31:21.071783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.531 qpair failed and we were unable to recover it. 00:29:20.531 [2024-12-10 14:31:21.071986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.531 [2024-12-10 14:31:21.072021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.531 qpair failed and we were unable to recover it. 00:29:20.531 [2024-12-10 14:31:21.072282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.531 [2024-12-10 14:31:21.072317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.531 qpair failed and we were unable to recover it. 00:29:20.531 [2024-12-10 14:31:21.072541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.072575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.072845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.072879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.073065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.073099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 
00:29:20.532 [2024-12-10 14:31:21.073394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.073429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.073639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.073674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.073893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.073927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.074209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.074251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.074399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.074434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.074654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.074687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.074876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.074909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.075186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.075227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.075416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.075451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.075598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.075633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 
00:29:20.532 [2024-12-10 14:31:21.075836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.075872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.076189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.076235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.076508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.076542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.076737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.076771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.077053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.077086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.077307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.077343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.077569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.077604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.077909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.077944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.078131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.078164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.078390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.078425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 
00:29:20.532 [2024-12-10 14:31:21.078548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.078582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.078720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.078754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.079071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.079105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.079386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.079421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.079550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.532 [2024-12-10 14:31:21.079584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.532 qpair failed and we were unable to recover it. 00:29:20.532 [2024-12-10 14:31:21.079796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.079832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.080039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.080073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.080290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.080326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.080534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.080568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.080775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.080809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 
00:29:20.533 [2024-12-10 14:31:21.081025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.081059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.081254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.081290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.081574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.081609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.081888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.081922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.082113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.082147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.082411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.082446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.082608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.082641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.082869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.082903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.083036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.083076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.083293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.083328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 
00:29:20.533 [2024-12-10 14:31:21.083474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.083509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.083694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.083728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.083957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.083992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.084248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.084283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.084571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.084606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.084927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.084961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.085235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.085271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.085477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.085512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.085715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.085750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.085958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.085992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 
00:29:20.533 [2024-12-10 14:31:21.086143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.086177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.086377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.086412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.086621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.086655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.533 qpair failed and we were unable to recover it. 00:29:20.533 [2024-12-10 14:31:21.086978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.533 [2024-12-10 14:31:21.087011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.087215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.087272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.087470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.087504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.087624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.087657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.087950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.087985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.088233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.088269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.088486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.088519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 
00:29:20.534 [2024-12-10 14:31:21.088709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.088744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.088873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.088907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.089177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.089211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.089499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.089533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.089813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.089847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.090129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.090168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.090452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.090487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.090699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.090734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.090916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.090950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.091237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.091273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 
00:29:20.534 [2024-12-10 14:31:21.091529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.091563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.091820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.091856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.092162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.092196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.092404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.092439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.092667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.092702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.093006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.093039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.093172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.093207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.093474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.093509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.093649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.093684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.093925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.093959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 
00:29:20.534 [2024-12-10 14:31:21.094161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.094195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.094476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.534 [2024-12-10 14:31:21.094511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.534 qpair failed and we were unable to recover it. 00:29:20.534 [2024-12-10 14:31:21.094660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.094694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.094948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.094982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.095192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.095250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.095482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.095517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.095741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.095775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.096001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.096035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.096240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.096277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.096460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.096493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 
00:29:20.535 [2024-12-10 14:31:21.096696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.096731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.096919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.096953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.097151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.097185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.097485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.097520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.097671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.097705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.097992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.098026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.098145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.098179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.098416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.098451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.098681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.098716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.098997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.099031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 
00:29:20.535 [2024-12-10 14:31:21.099244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.099278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.099535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.099569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.099683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.099717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.100004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.100038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.100340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.100376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.100586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.100629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.100813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.100849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.101101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.101135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.101266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.101302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 00:29:20.535 [2024-12-10 14:31:21.101509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.535 [2024-12-10 14:31:21.101543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.535 qpair failed and we were unable to recover it. 
00:29:20.535 [2024-12-10 14:31:21.101822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.536 [2024-12-10 14:31:21.101857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.536 qpair failed and we were unable to recover it. 00:29:20.536 [2024-12-10 14:31:21.102142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.536 [2024-12-10 14:31:21.102177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.536 qpair failed and we were unable to recover it. 00:29:20.536 [2024-12-10 14:31:21.102380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.536 [2024-12-10 14:31:21.102415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.536 qpair failed and we were unable to recover it. 00:29:20.536 [2024-12-10 14:31:21.102612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.536 [2024-12-10 14:31:21.102646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.536 qpair failed and we were unable to recover it. 00:29:20.536 [2024-12-10 14:31:21.102920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.536 [2024-12-10 14:31:21.102953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.536 qpair failed and we were unable to recover it. 00:29:20.536 [2024-12-10 14:31:21.103157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.536 [2024-12-10 14:31:21.103192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.536 qpair failed and we were unable to recover it. 00:29:20.536 [2024-12-10 14:31:21.103413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.536 [2024-12-10 14:31:21.103448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.536 qpair failed and we were unable to recover it. 00:29:20.536 [2024-12-10 14:31:21.103639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.536 [2024-12-10 14:31:21.103674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.536 qpair failed and we were unable to recover it. 00:29:20.536 [2024-12-10 14:31:21.103896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.536 [2024-12-10 14:31:21.103930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.536 qpair failed and we were unable to recover it. 00:29:20.536 [2024-12-10 14:31:21.104142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.536 [2024-12-10 14:31:21.104176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.536 qpair failed and we were unable to recover it. 
00:29:20.536 [2024-12-10 14:31:21.104379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.536 [2024-12-10 14:31:21.104415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:20.536 qpair failed and we were unable to recover it.
[... the same three-record sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats for every retry, with timestamps advancing from 14:31:21.104 to 14:31:21.162 ...]
00:29:20.543 [2024-12-10 14:31:21.162447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.543 [2024-12-10 14:31:21.162482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.543 qpair failed and we were unable to recover it. 00:29:20.543 [2024-12-10 14:31:21.162672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.543 [2024-12-10 14:31:21.162706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.543 qpair failed and we were unable to recover it. 00:29:20.543 [2024-12-10 14:31:21.162927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.543 [2024-12-10 14:31:21.162962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.543 qpair failed and we were unable to recover it. 00:29:20.543 [2024-12-10 14:31:21.163152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.543 [2024-12-10 14:31:21.163187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.543 qpair failed and we were unable to recover it. 00:29:20.543 [2024-12-10 14:31:21.163405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.543 [2024-12-10 14:31:21.163441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.543 qpair failed and we were unable to recover it. 00:29:20.543 [2024-12-10 14:31:21.163703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.543 [2024-12-10 14:31:21.163738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.543 qpair failed and we were unable to recover it. 00:29:20.543 [2024-12-10 14:31:21.163954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.543 [2024-12-10 14:31:21.163987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.543 qpair failed and we were unable to recover it. 00:29:20.543 [2024-12-10 14:31:21.164252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.543 [2024-12-10 14:31:21.164287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.543 qpair failed and we were unable to recover it. 00:29:20.543 [2024-12-10 14:31:21.164488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.543 [2024-12-10 14:31:21.164522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.543 qpair failed and we were unable to recover it. 00:29:20.543 [2024-12-10 14:31:21.164679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.543 [2024-12-10 14:31:21.164714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.543 qpair failed and we were unable to recover it. 
00:29:20.543 [2024-12-10 14:31:21.164976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.543 [2024-12-10 14:31:21.165010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.543 qpair failed and we were unable to recover it. 00:29:20.543 [2024-12-10 14:31:21.165175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.543 [2024-12-10 14:31:21.165209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.543 qpair failed and we were unable to recover it. 00:29:20.543 [2024-12-10 14:31:21.165420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.543 [2024-12-10 14:31:21.165454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.543 qpair failed and we were unable to recover it. 00:29:20.543 [2024-12-10 14:31:21.165677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.543 [2024-12-10 14:31:21.165711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.543 qpair failed and we were unable to recover it. 00:29:20.543 [2024-12-10 14:31:21.165967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.543 [2024-12-10 14:31:21.166002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.543 qpair failed and we were unable to recover it. 00:29:20.543 [2024-12-10 14:31:21.166127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.543 [2024-12-10 14:31:21.166161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.543 qpair failed and we were unable to recover it. 00:29:20.543 [2024-12-10 14:31:21.166362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.543 [2024-12-10 14:31:21.166397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.543 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.166619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.166653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.166854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.166888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.167141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.167175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 
00:29:20.544 [2024-12-10 14:31:21.167365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.167409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.167679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.167715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.167931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.167965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.168158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.168193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.168402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.168436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.168658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.168692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.168986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.169019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.169309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.169345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.169531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.169566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.169831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.169865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 
00:29:20.544 [2024-12-10 14:31:21.169998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.170031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.170234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.170270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.170425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.170461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.170647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.170681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.170870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.170904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.171114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.171148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.171405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.171441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.171630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.171665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.171800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.171835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.172105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.172139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 
00:29:20.544 [2024-12-10 14:31:21.172351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.172387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.172576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.172610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.172770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.172805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.172995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.173028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.173213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.173262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.173428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.173462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.173616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.173650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.173932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.173973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.174239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.174277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.174560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.174596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 
00:29:20.544 [2024-12-10 14:31:21.174784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.174819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.174958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.174991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.175131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.175164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.175440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.175476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.175656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.175690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.175883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.544 [2024-12-10 14:31:21.175918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.544 qpair failed and we were unable to recover it. 00:29:20.544 [2024-12-10 14:31:21.176138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.176172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.176311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.176345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.176529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.176563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.176747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.176781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 
00:29:20.545 [2024-12-10 14:31:21.176931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.176966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.177255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.177290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.177498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.177531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.177737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.177772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.178061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.178098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.178384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.178420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.178717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.178751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.179025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.179059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.179277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.179313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.179481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.179516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 
00:29:20.545 [2024-12-10 14:31:21.179767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.179802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.180004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.180039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.180168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.180202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.180441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.180476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.180623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.180663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.180782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.180817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.180952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.180986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.181263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.181299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.181506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.181540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.181726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.181760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 
00:29:20.545 [2024-12-10 14:31:21.182040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.182075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.182275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.182310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.182466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.182499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.182699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.182733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.182873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.182908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.183035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.183070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.183335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.183370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.183560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.183595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.183906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.183942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.184143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.184176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 
00:29:20.545 [2024-12-10 14:31:21.184384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.184420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.184550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.184584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.184800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.184834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.185044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.185077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.185293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.185329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.545 [2024-12-10 14:31:21.185486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.545 [2024-12-10 14:31:21.185521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.545 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.185715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.185750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.185937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.185972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.186113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.186148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.186407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.186444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 
00:29:20.546 [2024-12-10 14:31:21.186665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.186699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.186881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.186915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.187107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.187143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.187400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.187436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.187625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.187660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.187913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.187949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.188204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.188253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.188518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.188552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.188808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.188843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.189030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.189065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 
00:29:20.546 [2024-12-10 14:31:21.189296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.189332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.189544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.189577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.189760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.189796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.189999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.190034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.190232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.190267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.190537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.190618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.190918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.190958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.191198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.191244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.191476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.191512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.191743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.191778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 
00:29:20.546 [2024-12-10 14:31:21.191991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.192026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.192236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.192273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.192484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.192520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.192721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.192756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.192981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.193017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.193297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.193334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.193527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.193561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.193805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.193839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.194067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.194103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.194241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.194278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 
00:29:20.546 [2024-12-10 14:31:21.194401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.194435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.194643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.194678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.194913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.194947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.195240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.195276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.546 [2024-12-10 14:31:21.195401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.546 [2024-12-10 14:31:21.195435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.546 qpair failed and we were unable to recover it. 00:29:20.547 [2024-12-10 14:31:21.195637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.547 [2024-12-10 14:31:21.195672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.547 qpair failed and we were unable to recover it. 00:29:20.547 [2024-12-10 14:31:21.195816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.547 [2024-12-10 14:31:21.195851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.547 qpair failed and we were unable to recover it. 00:29:20.547 [2024-12-10 14:31:21.195976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.547 [2024-12-10 14:31:21.196010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.547 qpair failed and we were unable to recover it. 00:29:20.547 [2024-12-10 14:31:21.196293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.547 [2024-12-10 14:31:21.196329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.547 qpair failed and we were unable to recover it. 00:29:20.547 [2024-12-10 14:31:21.196459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.547 [2024-12-10 14:31:21.196494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.547 qpair failed and we were unable to recover it. 
00:29:20.829 [2024-12-10 14:31:21.247167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.829 [2024-12-10 14:31:21.247202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.829 qpair failed and we were unable to recover it. 00:29:20.829 [2024-12-10 14:31:21.247469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.829 [2024-12-10 14:31:21.247504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.829 qpair failed and we were unable to recover it. 00:29:20.829 [2024-12-10 14:31:21.247660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.829 [2024-12-10 14:31:21.247697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.829 qpair failed and we were unable to recover it. 00:29:20.829 [2024-12-10 14:31:21.247920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.829 [2024-12-10 14:31:21.247955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.829 qpair failed and we were unable to recover it. 00:29:20.829 [2024-12-10 14:31:21.248168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.829 [2024-12-10 14:31:21.248204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.829 qpair failed and we were unable to recover it. 00:29:20.829 [2024-12-10 14:31:21.248384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.829 [2024-12-10 14:31:21.248420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.829 qpair failed and we were unable to recover it. 00:29:20.829 [2024-12-10 14:31:21.248621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.829 [2024-12-10 14:31:21.248655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.829 qpair failed and we were unable to recover it. 00:29:20.829 [2024-12-10 14:31:21.248915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.829 [2024-12-10 14:31:21.248950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.829 qpair failed and we were unable to recover it. 00:29:20.829 [2024-12-10 14:31:21.249245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.829 [2024-12-10 14:31:21.249281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.829 qpair failed and we were unable to recover it. 00:29:20.829 [2024-12-10 14:31:21.249558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.829 [2024-12-10 14:31:21.249592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.829 qpair failed and we were unable to recover it. 
00:29:20.829 [2024-12-10 14:31:21.249801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.829 [2024-12-10 14:31:21.249835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.829 qpair failed and we were unable to recover it. 00:29:20.829 [2024-12-10 14:31:21.250026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.829 [2024-12-10 14:31:21.250061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.829 qpair failed and we were unable to recover it. 00:29:20.829 [2024-12-10 14:31:21.250194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.829 [2024-12-10 14:31:21.250236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.829 qpair failed and we were unable to recover it. 00:29:20.829 [2024-12-10 14:31:21.250515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.829 [2024-12-10 14:31:21.250551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.829 qpair failed and we were unable to recover it. 00:29:20.829 [2024-12-10 14:31:21.250708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.829 [2024-12-10 14:31:21.250744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.829 qpair failed and we were unable to recover it. 00:29:20.829 [2024-12-10 14:31:21.250999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.829 [2024-12-10 14:31:21.251038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.829 qpair failed and we were unable to recover it. 00:29:20.829 [2024-12-10 14:31:21.251256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.829 [2024-12-10 14:31:21.251292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.251496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.251531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.251808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.251843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.251985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.252019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 
00:29:20.830 [2024-12-10 14:31:21.252279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.252314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.252455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.252490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.252640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.252675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.252881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.252916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.253200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.253248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.253512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.253546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.253691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.253726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.253909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.253944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.254173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.254207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.254508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.254544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 
00:29:20.830 [2024-12-10 14:31:21.254753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.254787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.254916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.254951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.255065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.255100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.255326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.255362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.255638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.255674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.255983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.256018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.256301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.256337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.256563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.256597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.256871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.256905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.257196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.257237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 
00:29:20.830 [2024-12-10 14:31:21.257379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.257413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.257597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.257633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.257868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.257904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.258183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.258227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.258362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.258398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.258591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.258625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.258917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.258953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.259155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.259190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.259404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.259439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.259717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.259751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 
00:29:20.830 [2024-12-10 14:31:21.259951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.259985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.260282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.260319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.260586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.260621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.260826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.260861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.261120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.261154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.830 qpair failed and we were unable to recover it. 00:29:20.830 [2024-12-10 14:31:21.261460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.830 [2024-12-10 14:31:21.261502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.261771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.261806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.261936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.261971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.262253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.262289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.262490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.262526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 
00:29:20.831 [2024-12-10 14:31:21.262811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.262848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.263071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.263106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.263353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.263389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.263573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.263608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.263757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.263792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.263981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.264016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.264204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.264253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.264381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.264417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.264556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.264592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.264737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.264773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 
00:29:20.831 [2024-12-10 14:31:21.264999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.265033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.265289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.265325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.265517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.265555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.265695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.265730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.265938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.265973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.266155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.266190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.266428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.266464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.266647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.266682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.266876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.266915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.267126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.267161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 
00:29:20.831 [2024-12-10 14:31:21.267309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.267345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.267507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.267542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.267821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.267901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.268137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.268175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.268373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.268410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.268612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.268646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.268774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.268809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.269031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.269067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.269277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.269312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.269510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.269545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 
00:29:20.831 [2024-12-10 14:31:21.269771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.269805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.269996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.270030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.270164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.270198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.270362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.270397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.831 [2024-12-10 14:31:21.270623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.831 [2024-12-10 14:31:21.270657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.831 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.270787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.270830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.271033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.271067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.271198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.271243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.271440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.271473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.271613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.271645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 
00:29:20.832 [2024-12-10 14:31:21.271858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.271892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.272085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.272119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.272243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.272277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.272467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.272500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.272634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.272667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.272797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.272831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.272947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.272981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.273201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.273250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.273466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.273500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.273696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.273730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 
00:29:20.832 [2024-12-10 14:31:21.273920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.273953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.274066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.274100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.274381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.274417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.274612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.274646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.274780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.274813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.274944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.274978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.275171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.275205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.275405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.275440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.275563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.275598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.275852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.275888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 
00:29:20.832 [2024-12-10 14:31:21.276095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.276131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.276338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.276375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.276598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.276638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.276870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.276906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.277041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.277076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.277292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.277333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.277469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.277505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.277704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.277738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.832 [2024-12-10 14:31:21.277860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.832 [2024-12-10 14:31:21.277894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.832 qpair failed and we were unable to recover it. 00:29:20.833 [2024-12-10 14:31:21.278100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.833 [2024-12-10 14:31:21.278136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.833 qpair failed and we were unable to recover it. 
00:29:20.833 [2024-12-10 14:31:21.278349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.833 [2024-12-10 14:31:21.278385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.833 qpair failed and we were unable to recover it. 00:29:20.833 [2024-12-10 14:31:21.278509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.833 [2024-12-10 14:31:21.278544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.833 qpair failed and we were unable to recover it. 00:29:20.833 [2024-12-10 14:31:21.278701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.833 [2024-12-10 14:31:21.278736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.833 qpair failed and we were unable to recover it. 00:29:20.833 [2024-12-10 14:31:21.278957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.833 [2024-12-10 14:31:21.278992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.833 qpair failed and we were unable to recover it. 00:29:20.833 [2024-12-10 14:31:21.279187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.833 [2024-12-10 14:31:21.279228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.833 qpair failed and we were unable to recover it. 00:29:20.833 [2024-12-10 14:31:21.279436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.833 [2024-12-10 14:31:21.279479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.833 qpair failed and we were unable to recover it. 00:29:20.833 [2024-12-10 14:31:21.279614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.833 [2024-12-10 14:31:21.279648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.833 qpair failed and we were unable to recover it. 00:29:20.833 [2024-12-10 14:31:21.279910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.833 [2024-12-10 14:31:21.279945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.833 qpair failed and we were unable to recover it. 00:29:20.833 [2024-12-10 14:31:21.280081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.833 [2024-12-10 14:31:21.280116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.833 qpair failed and we were unable to recover it. 00:29:20.833 [2024-12-10 14:31:21.280251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.833 [2024-12-10 14:31:21.280286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.833 qpair failed and we were unable to recover it. 
00:29:20.833 [2024-12-10 14:31:21.280419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.833 [2024-12-10 14:31:21.280456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.833 qpair failed and we were unable to recover it. 00:29:20.833 [2024-12-10 14:31:21.280712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.833 [2024-12-10 14:31:21.280748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.833 qpair failed and we were unable to recover it. 00:29:20.833 [2024-12-10 14:31:21.280881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.833 [2024-12-10 14:31:21.280916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.833 qpair failed and we were unable to recover it. 00:29:20.833 [2024-12-10 14:31:21.281106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.833 [2024-12-10 14:31:21.281141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.833 qpair failed and we were unable to recover it. 00:29:20.833 [2024-12-10 14:31:21.281468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.833 [2024-12-10 14:31:21.281509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.833 qpair failed and we were unable to recover it. 00:29:20.833 [2024-12-10 14:31:21.281720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.833 [2024-12-10 14:31:21.281756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.833 qpair failed and we were unable to recover it. 00:29:20.833 [2024-12-10 14:31:21.282035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.833 [2024-12-10 14:31:21.282070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.833 qpair failed and we were unable to recover it. 00:29:20.833 [2024-12-10 14:31:21.282186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.833 [2024-12-10 14:31:21.282228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.833 qpair failed and we were unable to recover it. 00:29:20.833 [2024-12-10 14:31:21.282422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.833 [2024-12-10 14:31:21.282458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.833 qpair failed and we were unable to recover it. 00:29:20.833 [2024-12-10 14:31:21.282606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.833 [2024-12-10 14:31:21.282643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.833 qpair failed and we were unable to recover it. 
00:29:20.833 [2024-12-10 14:31:21.282791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.833 [2024-12-10 14:31:21.282825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.833 qpair failed and we were unable to recover it.
00:29:20.833 [2024-12-10 14:31:21.283029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.833 [2024-12-10 14:31:21.283064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.833 qpair failed and we were unable to recover it.
00:29:20.833 [2024-12-10 14:31:21.283251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.833 [2024-12-10 14:31:21.283288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.833 qpair failed and we were unable to recover it.
00:29:20.833 [2024-12-10 14:31:21.283408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.833 [2024-12-10 14:31:21.283443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.833 qpair failed and we were unable to recover it.
00:29:20.833 [2024-12-10 14:31:21.283579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.833 [2024-12-10 14:31:21.283613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.833 qpair failed and we were unable to recover it.
00:29:20.833 [2024-12-10 14:31:21.283870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.833 [2024-12-10 14:31:21.283904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.833 qpair failed and we were unable to recover it.
00:29:20.833 [2024-12-10 14:31:21.284019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.833 [2024-12-10 14:31:21.284053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.833 qpair failed and we were unable to recover it.
00:29:20.833 [2024-12-10 14:31:21.284266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.833 [2024-12-10 14:31:21.284304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.833 qpair failed and we were unable to recover it.
00:29:20.833 [2024-12-10 14:31:21.284513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.833 [2024-12-10 14:31:21.284548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.833 qpair failed and we were unable to recover it.
00:29:20.833 [2024-12-10 14:31:21.284827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.833 [2024-12-10 14:31:21.284862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.833 qpair failed and we were unable to recover it.
00:29:20.833 [2024-12-10 14:31:21.284972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.833 [2024-12-10 14:31:21.285007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.833 qpair failed and we were unable to recover it.
00:29:20.833 [2024-12-10 14:31:21.285132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.833 [2024-12-10 14:31:21.285165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.833 qpair failed and we were unable to recover it.
00:29:20.833 [2024-12-10 14:31:21.285368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.833 [2024-12-10 14:31:21.285448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.833 qpair failed and we were unable to recover it.
00:29:20.833 [2024-12-10 14:31:21.285695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.833 [2024-12-10 14:31:21.285733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.833 qpair failed and we were unable to recover it.
00:29:20.833 [2024-12-10 14:31:21.285877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.833 [2024-12-10 14:31:21.285912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.833 qpair failed and we were unable to recover it.
00:29:20.833 [2024-12-10 14:31:21.286125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.833 [2024-12-10 14:31:21.286159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.833 qpair failed and we were unable to recover it.
00:29:20.833 [2024-12-10 14:31:21.286367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.833 [2024-12-10 14:31:21.286404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.833 qpair failed and we were unable to recover it.
00:29:20.833 [2024-12-10 14:31:21.286609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.833 [2024-12-10 14:31:21.286643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.286767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.286802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.286987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.287022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.287208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.287257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.287384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.287419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.287634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.287669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.287855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.287889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.288075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.288110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.288363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.288409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.288613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.288648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.288878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.288912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.289049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.289084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.289226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.289262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.289398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.289432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.289628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.289662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.289938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.289971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.290185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.290246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.290389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.290425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.290551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.290585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.290787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.290822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.290949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.290982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.291121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.291155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.291305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.291341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.291474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.291506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.291649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.291683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.291873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.291906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.292097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.292130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.292265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.292301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.292456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.292488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.292617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.292651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.292857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.292891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.293104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.293138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.293257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.293291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.293426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.293459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.293599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.293634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.293893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.293973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.294184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.294251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.294406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.294442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.294565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.294597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.834 [2024-12-10 14:31:21.294854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.834 [2024-12-10 14:31:21.294889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.834 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.295027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.295060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.295268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.295304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.295414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.295448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.295640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.295675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.295884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.295918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.296119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.296152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.296294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.296341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.296548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.296582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.296693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.296737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.296932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.296967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.297104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.297137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.297334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.297368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.297506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.297541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.297749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.297784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.298057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.298091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.298200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.298263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.298396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.298430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.298624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.298657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.298868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.298901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.299022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.299056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.299342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.299377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.299509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.299544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.299757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.299792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.299909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.299955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.300078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.300112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.300296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.300330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.300449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.300483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.300666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.300700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.300825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.300859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.300993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.301028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.301161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.301195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.301403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.301438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.301570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.301603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.301721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.301754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.301948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.301983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.302107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.302144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.302330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.302366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.302480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.302514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.302627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.835 [2024-12-10 14:31:21.302660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.835 qpair failed and we were unable to recover it.
00:29:20.835 [2024-12-10 14:31:21.302932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.302966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.303157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.303192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.303324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.303360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.303646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.303679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.303895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.303928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.304074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.304107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.304264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.304298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.304480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.304515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.304776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.304809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.304941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.304981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.305184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.305229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.305371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.305405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.305589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.305623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.305810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.305843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.306044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.306079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.306190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.306231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.306453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.306486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.306668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.306700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.306881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.306915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.307125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.307158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.307468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.307502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.307653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.307686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.307900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.307933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.308122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.308155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.308345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.308380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.308573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.308606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.308855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.308889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.309001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.309035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.309175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.309209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.309375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.309408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.309604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.309638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.309918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.309952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.310132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.310165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.310370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.310406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.310538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.310572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.310685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.310719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.310923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.310975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.311202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.311248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.311377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.836 [2024-12-10 14:31:21.311411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.836 qpair failed and we were unable to recover it.
00:29:20.836 [2024-12-10 14:31:21.311610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.311643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.311889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.311922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.312100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.312134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.312277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.312312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.312501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.312534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.312712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.312745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.312938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.312971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.313094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.313128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.313240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.313275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.313406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.313439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.313638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.313671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.313784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.313817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.314064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.314097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.314211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.314259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.314368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.314401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.314588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.314622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.314743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.314776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.314890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.314924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.315106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.315139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.315320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.315354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.315538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.315571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.315838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.315872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.316148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.316181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.316471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.316505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.316709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.316743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.316932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.316965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.317229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.317264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.317452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.317485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.317598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.317631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.317910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.317943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.318064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.318098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.318284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.318319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.318448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.318482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.318688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.318722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.318918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.837 [2024-12-10 14:31:21.318951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.837 qpair failed and we were unable to recover it.
00:29:20.837 [2024-12-10 14:31:21.319087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.319121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.319302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.319336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.319475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.319514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.319735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.319769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.319970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.320003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.320178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.320211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.320416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.320450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.320639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.320672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.320851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.320884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.321141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.321174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 
00:29:20.838 [2024-12-10 14:31:21.321361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.321396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.321593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.321626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.321866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.321900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.322111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.322144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.322423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.322458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.322590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.322623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.322859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.322893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.323034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.323067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.323350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.323384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.323503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.323536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 
00:29:20.838 [2024-12-10 14:31:21.323731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.323765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.323945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.323979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.324175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.324209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.324354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.324387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.324515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.324548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.324816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.324849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.325053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.325087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.325196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.325236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.325374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.325407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 00:29:20.838 [2024-12-10 14:31:21.325622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.838 [2024-12-10 14:31:21.325655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.838 qpair failed and we were unable to recover it. 
00:29:20.838 [2024-12-10 14:31:21.325783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.838 [2024-12-10 14:31:21.325816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.838 qpair failed and we were unable to recover it.
00:29:20.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1815669 Killed "${NVMF_APP[@]}" "$@"
00:29:20.838 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
[between and after these lines, the connect() failed (errno = 111) / qpair failed sequence repeats, timestamps 14:31:21.326043 through 14:31:21.327300]
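errno 111 is ECONNREFUSED: connect() reached the host, but nothing was listening on 10.0.0.2:4420. That is the expected state at this point in the test, since the target application was just killed ('Killed "${NVMF_APP[@]}"') and has not yet been restarted, so every reconnect attempt from the host side is refused until the new nvmf_tgt comes up. A minimal standalone bash sketch of the same probe-and-retry condition; the address and port are taken from this log, the retry cadence is illustrative, and the sketch is not part of the SPDK test suite:

    #!/usr/bin/env bash
    # Probe 10.0.0.2:4420 the way the host's reconnect loop does; while no
    # nvmf_tgt is listening, each attempt fails with ECONNREFUSED (errno 111).
    until bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
        echo "connect() refused (errno 111); target not listening yet, retrying..."
        sleep 0.1
    done
    echo "target is accepting connections again"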
00:29:20.838 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:20.839 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:20.839 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:20.839 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[interleaved with these traces, the connect() failed (errno = 111) / qpair failed sequence repeats, timestamps 14:31:21.327425 through 14:31:21.328780]
[the connect() failed (errno = 111) / qpair failed sequence repeats uninterrupted, timestamps 14:31:21.328958 through 14:31:21.335042]
00:29:20.840 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1816419
00:29:20.840 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1816419
00:29:20.840 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:20.840 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1816419 ']'
00:29:20.840 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
[interleaved with these traces, the connect() failed (errno = 111) / qpair failed sequence repeats, timestamps 14:31:21.335247 through 14:31:21.336682]
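The ip netns exec line traced above is the actual restart of the target inside the test's network namespace. The breakdown below is a hedged reading aid based on SPDK's common application options, not output from this log:

    # Breakdown of the relaunch command (flag meanings per SPDK's standard
    # application options; an annotated reading of the trace above):
    #   ip netns exec cvl_0_0_ns_spdk  -> run inside the test's network namespace
    #   nvmf_tgt                       -> the NVMe-oF target application
    #   -i 0                           -> shared-memory instance ID 0
    #   -e 0xFFFF                      -> tracepoint group mask (enable all trace groups)
    #   -m 0xF0                        -> CPU core mask 0xF0, i.e. cores 4-7
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0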
00:29:20.840 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:20.840 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:20.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:20.840 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:20.840 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[interleaved with these traces, the connect() failed (errno = 111) / qpair failed sequence repeats, timestamps 14:31:21.336925 through 14:31:21.338457]
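As its own xtrace shows (max_retries=100, rpc_addr=/var/tmp/spdk.sock), waitforlisten 1816419 polls until the restarted target is alive and its RPC socket exists. A minimal sketch of that pattern using only values visible in this log; the real helper in test/common/autotest_common.sh does more, such as probing the RPC endpoint itself:

    #!/usr/bin/env bash
    # Wait for pid 1816419 to stay alive and create /var/tmp/spdk.sock,
    # giving up after max_retries polls (values as traced above).
    pid=1816419 rpc_addr=/var/tmp/spdk.sock max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || { echo "process $pid exited before listening" >&2; exit 1; }
        [[ -S $rpc_addr ]] && { echo "process $pid is listening on $rpc_addr"; exit 0; }
        sleep 0.5
    done
    echo "timed out waiting for $rpc_addr" >&2
    exit 1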
[the connect() failed (errno = 111) / qpair failed sequence repeats uninterrupted, timestamps 14:31:21.338593 through 14:31:21.356250]
00:29:20.842 [2024-12-10 14:31:21.356442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.842 [2024-12-10 14:31:21.356475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.842 qpair failed and we were unable to recover it. 00:29:20.842 [2024-12-10 14:31:21.356687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.842 [2024-12-10 14:31:21.356719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.842 qpair failed and we were unable to recover it. 00:29:20.842 [2024-12-10 14:31:21.356840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.842 [2024-12-10 14:31:21.356873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.842 qpair failed and we were unable to recover it. 00:29:20.842 [2024-12-10 14:31:21.357014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.842 [2024-12-10 14:31:21.357047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.842 qpair failed and we were unable to recover it. 00:29:20.842 [2024-12-10 14:31:21.357159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.357191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.357317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.357349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.357537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.357570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.357680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.357712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.357837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.357870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.357995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.358028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 
00:29:20.843 [2024-12-10 14:31:21.358275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.358319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.358545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.358579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.358702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.358735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.358914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.358948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.359072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.359106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.359294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.359330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.359626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.359658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.359866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.359899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.360081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.360115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.360258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.360291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 
00:29:20.843 [2024-12-10 14:31:21.360423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.360455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.360711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.360744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.360874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.360906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.361092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.361126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.361250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.361285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.361424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.361457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.361582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.361615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.361728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.361767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.361943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.361975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.362152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.362184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 
00:29:20.843 [2024-12-10 14:31:21.362406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.362438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.362614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.362644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.362748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.362777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.362964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.362994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.363119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.363148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.363286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.363317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.363557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.363586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.363720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.363749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.363854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.363883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.364066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.364095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 
00:29:20.843 [2024-12-10 14:31:21.364302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.364332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.364465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.364495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.364606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.843 [2024-12-10 14:31:21.364635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.843 qpair failed and we were unable to recover it. 00:29:20.843 [2024-12-10 14:31:21.364821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.364851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.364958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.364987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.365173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.365203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.365411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.365441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.365562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.365593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.365785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.365815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.365992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.366023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 
00:29:20.844 [2024-12-10 14:31:21.366232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.366264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.366462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.366492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.366607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.366637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.366754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.366785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.366963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.366994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.367194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.367235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.367352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.367382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.367499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.367529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.367655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.367686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.367817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.367847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 
00:29:20.844 [2024-12-10 14:31:21.368014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.368045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.368156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.368186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.368416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.368449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.368713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.368746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.368993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.369024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.369201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.369243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.369430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.369466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.369579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.369619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.369728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.369758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.369880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.369911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 
00:29:20.844 [2024-12-10 14:31:21.370107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.370139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.370271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.370306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.370427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.370460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.370723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.370762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.370876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.370909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.371130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.371161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.371297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.371330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.371480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.371512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.371755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.371792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.371906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.371941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 
00:29:20.844 [2024-12-10 14:31:21.372060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.372093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.372230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.372266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.844 [2024-12-10 14:31:21.372449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.844 [2024-12-10 14:31:21.372481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.844 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.372741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.372775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.372899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.372931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.373109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.373141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.373328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.373362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.373558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.373589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.373725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.373757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.373880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.373913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 
00:29:20.845 [2024-12-10 14:31:21.374048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.374081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.374282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.374314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.374566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.374598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.374780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.374812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.374924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.374957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.375090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.375123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.375309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.375343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.375477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.375510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.375691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.375722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.375964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.375997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 
00:29:20.845 [2024-12-10 14:31:21.376170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.376202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.376392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.376425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.376533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.376565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.376694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.376727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.376841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.376873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.376993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.377025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.377210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.377252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.377405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.377442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.377563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.377596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.377790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.377823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 
00:29:20.845 [2024-12-10 14:31:21.377952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.377984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.378162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.378195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.378404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.378438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.378707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.378740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.378876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.378908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.379031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.379063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.379276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.379310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.379417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.379446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.379550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.379582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.379771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.379803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 
00:29:20.845 [2024-12-10 14:31:21.379975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.380007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.845 [2024-12-10 14:31:21.380213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.845 [2024-12-10 14:31:21.380275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.845 qpair failed and we were unable to recover it. 00:29:20.846 [2024-12-10 14:31:21.380472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.846 [2024-12-10 14:31:21.380505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.846 qpair failed and we were unable to recover it. 00:29:20.846 [2024-12-10 14:31:21.380626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.846 [2024-12-10 14:31:21.380658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.846 qpair failed and we were unable to recover it. 00:29:20.846 [2024-12-10 14:31:21.380927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.846 [2024-12-10 14:31:21.380960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.846 qpair failed and we were unable to recover it. 00:29:20.846 [2024-12-10 14:31:21.381172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.846 [2024-12-10 14:31:21.381205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.846 qpair failed and we were unable to recover it. 00:29:20.846 [2024-12-10 14:31:21.381367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.846 [2024-12-10 14:31:21.381400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.846 qpair failed and we were unable to recover it. 00:29:20.846 [2024-12-10 14:31:21.381585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.846 [2024-12-10 14:31:21.381617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.846 qpair failed and we were unable to recover it. 00:29:20.846 [2024-12-10 14:31:21.381743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.846 [2024-12-10 14:31:21.381776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.846 qpair failed and we were unable to recover it. 00:29:20.846 [2024-12-10 14:31:21.381882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.846 [2024-12-10 14:31:21.381914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.846 qpair failed and we were unable to recover it. 
00:29:20.846 [2024-12-10 14:31:21.382042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.846 [2024-12-10 14:31:21.382074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.846 qpair failed and we were unable to recover it.
[... the three-line connect() failed (errno = 111) / sock connection error / qpair failed sequence above repeats 39 more times for tqpair=0x7f6368000b90, timestamps 14:31:21.382198 through 14:31:21.390170 ...]
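For context when reading the burst above: errno = 111 is ECONNREFUSED on Linux, i.e. the host at 10.0.0.2 answered each SYN with a RST because nothing was listening on port 4420 (the IANA-assigned NVMe/TCP port) at that instant. A minimal standalone sketch, assuming an ordinary Linux host and not taken from SPDK's sources, that reproduces the same errno against a port with no listener:

    /* Sketch only, not SPDK code: connect() to a port with no listener
     * fails with errno = 111 (ECONNREFUSED) on Linux. Address and port
     * are taken from the log above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);            /* NVMe/TCP well-known port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the target this prints errno = 111 */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

Compiled with cc and run while no listener is bound to the port, this prints connect() failed, errno = 111 (Connection refused), matching the posix_sock_create error text in the log.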
00:29:20.847 [2024-12-10 14:31:21.390392] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization...
00:29:20.847 [2024-12-10 14:31:21.390442] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... interleaved with the target's startup, the same connect() failed (errno = 111) / qpair failed sequence continues 9 more times for tqpair=0x7f6368000b90, timestamps 14:31:21.390458 through 14:31:21.392244 ...]
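The two lines above show the nvmf target process (re)starting and handing its EAL parameters to DPDK, which is consistent with the initiator's connect() attempts being refused while the listener is not yet up. As a rough illustration of where such parameters go, a hedged sketch of a DPDK application passing a similar argv to rte_eal_init(); the argv plumbing here is an assumption for illustration, not SPDK's actual startup code:

    /* Illustrative sketch: feeding EAL parameters like the ones logged
     * above to rte_eal_init(). Build against DPDK, e.g. with
     * `pkg-config --cflags --libs libdpdk`. */
    #include <rte_eal.h>
    #include <stdio.h>

    int main(void)
    {
        char *eal_argv[] = {
            "nvmf",                          /* program name, as in the log */
            "-c", "0xF0",                    /* core mask: cores 4-7 */
            "--no-telemetry",
            "--base-virtaddr=0x200000000000",
            "--file-prefix=spdk0",           /* per-app hugepage file prefix */
            "--proc-type=auto",              /* primary/secondary auto-detect */
        };
        int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

        if (rte_eal_init(eal_argc, eal_argv) < 0) {
            fprintf(stderr, "EAL initialization failed\n");
            return 1;
        }
        /* ... application setup would continue here ... */
        return 0;
    }

In the logged invocation, -c 0xF0 pins the target to cores 4-7, --file-prefix=spdk0 keeps its hugepage files from colliding with other DPDK processes on the node, and --proc-type=auto lets EAL decide between primary and secondary process roles.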
[... 7 further repeats for tqpair=0x7f6368000b90, timestamps 14:31:21.392492 through 14:31:21.393655; the initiator then retries under a new qpair context and the same failure continues ...]
00:29:20.847 [2024-12-10 14:31:21.393975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.847 [2024-12-10 14:31:21.394050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.847 qpair failed and we were unable to recover it.
[... 2 more repeats for tqpair=0x7f6364000b90 within this burst, timestamps 14:31:21.394315 and 14:31:21.394474 ...]
[... the identical connect() failed (errno = 111) / sock connection error / qpair failed sequence repeats 150 more times for tqpair=0x7f6364000b90 against addr=10.0.0.2, port=4420, timestamps 14:31:21.394623 through 14:31:21.425644 ...]
00:29:20.851 [2024-12-10 14:31:21.425914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.851 [2024-12-10 14:31:21.425948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.851 qpair failed and we were unable to recover it. 00:29:20.851 [2024-12-10 14:31:21.426133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.851 [2024-12-10 14:31:21.426166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.851 qpair failed and we were unable to recover it. 00:29:20.851 [2024-12-10 14:31:21.426373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.851 [2024-12-10 14:31:21.426408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.851 qpair failed and we were unable to recover it. 00:29:20.851 [2024-12-10 14:31:21.426539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.851 [2024-12-10 14:31:21.426570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.851 qpair failed and we were unable to recover it. 00:29:20.851 [2024-12-10 14:31:21.426834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.851 [2024-12-10 14:31:21.426869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.851 qpair failed and we were unable to recover it. 00:29:20.851 [2024-12-10 14:31:21.427046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.851 [2024-12-10 14:31:21.427078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.851 qpair failed and we were unable to recover it. 00:29:20.851 [2024-12-10 14:31:21.427256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.851 [2024-12-10 14:31:21.427290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.851 qpair failed and we were unable to recover it. 00:29:20.851 [2024-12-10 14:31:21.427480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.851 [2024-12-10 14:31:21.427520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.851 qpair failed and we were unable to recover it. 00:29:20.851 [2024-12-10 14:31:21.427707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.851 [2024-12-10 14:31:21.427739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.851 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.427873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.427914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 
00:29:20.852 [2024-12-10 14:31:21.428186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.428227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.428438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.428471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.428665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.428698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.428833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.428865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.429041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.429073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.429247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.429281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.429462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.429495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.429611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.429643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.429834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.429867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.429980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.430012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 
00:29:20.852 [2024-12-10 14:31:21.430211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.430254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.430437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.430469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.430587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.430617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.430839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.430874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.431065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.431098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.431280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.431315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.431427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.431458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.431724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.431755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.431883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.431915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.432112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.432144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 
00:29:20.852 [2024-12-10 14:31:21.432334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.432367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.432550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.432583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.432690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.432721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.432923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.432957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.433164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.433196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.433400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.433434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.433637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.433669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.433867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.433899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.434112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.434145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.434329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.434364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 
00:29:20.852 [2024-12-10 14:31:21.434636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.434670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.434790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.434823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.435091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.435123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.435310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.435346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.435472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.435503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.435619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.435650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.435846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.435879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.436059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.852 [2024-12-10 14:31:21.436097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.852 qpair failed and we were unable to recover it. 00:29:20.852 [2024-12-10 14:31:21.436276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.436310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.436450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.436481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 
00:29:20.853 [2024-12-10 14:31:21.436722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.436754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.436942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.436974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.437094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.437126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.437332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.437366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.437507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.437538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.437755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.437788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.437895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.437926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.438124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.438157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.438295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.438327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.438520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.438551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 
00:29:20.853 [2024-12-10 14:31:21.438687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.438720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.438850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.438883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.439012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.439043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.439235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.439270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.439387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.439420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.439552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.439584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.439763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.439795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.439913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.439946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.440142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.440174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.440378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.440411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 
00:29:20.853 [2024-12-10 14:31:21.440520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.440551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.440680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.440713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.440830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.440862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.440989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.441022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.441234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.441268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.441468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.441501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.441621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.441653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.441857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.441889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.442013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.442045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 00:29:20.853 [2024-12-10 14:31:21.442157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.853 [2024-12-10 14:31:21.442188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.853 qpair failed and we were unable to recover it. 
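errno = 111 in these records is Linux's ECONNREFUSED: every connect() to 10.0.0.2:4420 (4420 being the IANA-registered NVMe/TCP port) was actively refused, which for TCP typically means the SYN was answered with an RST because no listener was accepting connections on the target at that moment. The sketch below reproduces the same failure with plain POSIX sockets; it is illustrative only, not SPDK's posix.c, and it assumes (as the log shows) that nothing is listening at the address and port taken from the records above:

    /* Minimal reproduction sketch (assumption: no listener at 10.0.0.2:4420). */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa = { 0 };
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                 /* NVMe/TCP well-known port, as in the log */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* With no listener on the target this prints errno = 111 (Connection refused). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }

Run from the initiator while the target listener is down, this prints "connect() failed, errno = 111 (Connection refused)" on every attempt, matching the records above; once a listener binds port 4420 the same code connects cleanly.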
00:29:20.853 [2024-12-10 14:31:21.442385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.853 [2024-12-10 14:31:21.442457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:20.853 qpair failed and we were unable to recover it.
[... the same three-line error repeats 47 more times for tqpair=0x240d500, 14:31:21.442608 through 14:31:21.452292 ...]
00:29:20.855 [2024-12-10 14:31:21.452518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.855 [2024-12-10 14:31:21.452592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.855 qpair failed and we were unable to recover it.
[... the same three-line error repeats 39 more times for tqpair=0x7f6364000b90, 14:31:21.452955 through 14:31:21.459579 ...]
00:29:20.856 [2024-12-10 14:31:21.459710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.856 [2024-12-10 14:31:21.459749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:20.856 qpair failed and we were unable to recover it.
00:29:20.856 [2024-12-10 14:31:21.459870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.856 [2024-12-10 14:31:21.459904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:20.856 qpair failed and we were unable to recover it.
00:29:20.858 [2024-12-10 14:31:21.478569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.858 [2024-12-10 14:31:21.478601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:20.858 qpair failed and we were unable to recover it.
00:29:20.858 [2024-12-10 14:31:21.478758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.858 [2024-12-10 14:31:21.478827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.858 qpair failed and we were unable to recover it.
00:29:20.858 [2024-12-10 14:31:21.480482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.858 [2024-12-10 14:31:21.480497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:20.858 [2024-12-10 14:31:21.480514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.858 qpair failed and we were unable to recover it.
00:29:20.858 [2024-12-10 14:31:21.481418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.858 [2024-12-10 14:31:21.481450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.858 qpair failed and we were unable to recover it.
00:29:20.858 [2024-12-10 14:31:21.481632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.858 [2024-12-10 14:31:21.481670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.858 qpair failed and we were unable to recover it.
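The single NOTICE interleaved above comes from the SPDK application framework starting up while the connection retries continue; "Total cores available: 4" is printed by spdk_app_start() as it brings up its reactors. As a hedged illustration only (this follows the hello_world-style spdk_app_start() pattern, not this test's own code, and option fields vary by SPDK version), an application reaches that NOTICE roughly like this:

    /* Hedged sketch: minimal SPDK app startup; assumes the
     * hello_world-style event-framework API. */
    #include "spdk/event.h"

    static void
    app_main(void *ctx)
    {
        /* Application work would begin here; stop immediately
         * so the sketch exits cleanly. */
        spdk_app_stop(0);
    }

    int
    main(int argc, char **argv)
    {
        struct spdk_app_opts opts;
        int rc;

        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "sketch";

        /* spdk_app_start() initializes the environment, logs the
         * "Total cores available" NOTICE, and runs app_main on a
         * reactor thread. */
        rc = spdk_app_start(&opts, app_main, NULL);

        spdk_app_fini();
        return rc;
    }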
00:29:20.859 [2024-12-10 14:31:21.489193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.860 [2024-12-10 14:31:21.489235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:20.860 qpair failed and we were unable to recover it.
00:29:20.860 [2024-12-10 14:31:21.489373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.860 [2024-12-10 14:31:21.489412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:20.860 qpair failed and we were unable to recover it.
00:29:20.860 [2024-12-10 14:31:21.490073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.490107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.490237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.490272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.490402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.490435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.490632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.490666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.490781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.490816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.490939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.490972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.491112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.491147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.491403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.491440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.491608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.491642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.491837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.491872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 
00:29:20.860 [2024-12-10 14:31:21.491988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.492022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.492336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.492370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.492545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.492577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.492786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.492820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.492987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.493020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.493146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.493179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.493319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.493352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.493531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.493563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.493825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.493857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.493976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.494009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 
00:29:20.860 [2024-12-10 14:31:21.494190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.494235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.494407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.494440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.494561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.494595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.494879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.494912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.495041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.495075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.495257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.495291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.495473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.495506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.495618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.495649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.495775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.495808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.495982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.496015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 
00:29:20.860 [2024-12-10 14:31:21.496132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.496164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.496371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.496405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.496535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.496568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.496685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.496717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.496843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.860 [2024-12-10 14:31:21.496875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.860 qpair failed and we were unable to recover it. 00:29:20.860 [2024-12-10 14:31:21.497065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.497099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.497228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.497262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.497384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.497422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.497593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.497627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.497836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.497869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 
00:29:20.861 [2024-12-10 14:31:21.497982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.498016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.498191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.498233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.498427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.498460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.498730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.498763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.498954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.498987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.499111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.499143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.499325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.499360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.499543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.499580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.499694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.499727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.499917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.499950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 
00:29:20.861 [2024-12-10 14:31:21.500136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.500169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.500325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.500361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.500562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.500595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.500786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.500828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.500963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.500997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.501132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.501166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.501305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.501339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.501516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.501550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.501653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.501687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.501790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.501823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 
00:29:20.861 [2024-12-10 14:31:21.501939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.501972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.502210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.502262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.502453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.502487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.502612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.502645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.502875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.502909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.503083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.503115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.503241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.503276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.503481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.503514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.503697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.503730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.503835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.503868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 
00:29:20.861 [2024-12-10 14:31:21.503989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.504023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.504212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.504254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.504481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.504517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.504761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.504793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.861 [2024-12-10 14:31:21.504996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.861 [2024-12-10 14:31:21.505029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.861 qpair failed and we were unable to recover it. 00:29:20.862 [2024-12-10 14:31:21.505284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.862 [2024-12-10 14:31:21.505319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.862 qpair failed and we were unable to recover it. 00:29:20.862 [2024-12-10 14:31:21.505494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.862 [2024-12-10 14:31:21.505531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.862 qpair failed and we were unable to recover it. 00:29:20.862 [2024-12-10 14:31:21.505737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.862 [2024-12-10 14:31:21.505777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.862 qpair failed and we were unable to recover it. 00:29:20.862 [2024-12-10 14:31:21.505901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.862 [2024-12-10 14:31:21.505934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.862 qpair failed and we were unable to recover it. 00:29:20.862 [2024-12-10 14:31:21.506197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.862 [2024-12-10 14:31:21.506239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.862 qpair failed and we were unable to recover it. 
00:29:20.862 [2024-12-10 14:31:21.506370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.862 [2024-12-10 14:31:21.506405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.862 qpair failed and we were unable to recover it. 00:29:20.862 [2024-12-10 14:31:21.506578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.862 [2024-12-10 14:31:21.506611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.862 qpair failed and we were unable to recover it. 00:29:20.862 [2024-12-10 14:31:21.506790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.862 [2024-12-10 14:31:21.506822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.862 qpair failed and we were unable to recover it. 00:29:20.862 [2024-12-10 14:31:21.506950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.862 [2024-12-10 14:31:21.506983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.862 qpair failed and we were unable to recover it. 00:29:20.862 [2024-12-10 14:31:21.507169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.862 [2024-12-10 14:31:21.507204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.862 qpair failed and we were unable to recover it. 00:29:20.862 [2024-12-10 14:31:21.507421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.862 [2024-12-10 14:31:21.507454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.862 qpair failed and we were unable to recover it. 00:29:20.862 [2024-12-10 14:31:21.507634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.862 [2024-12-10 14:31:21.507666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.862 qpair failed and we were unable to recover it. 00:29:20.862 [2024-12-10 14:31:21.507837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.862 [2024-12-10 14:31:21.507870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.862 qpair failed and we were unable to recover it. 00:29:20.862 [2024-12-10 14:31:21.508000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.862 [2024-12-10 14:31:21.508032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.862 qpair failed and we were unable to recover it. 00:29:20.862 [2024-12-10 14:31:21.508153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.862 [2024-12-10 14:31:21.508185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:20.862 qpair failed and we were unable to recover it. 
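For readability: errno = 111 on Linux is ECONNREFUSED, i.e. nothing was accepting TCP connections on 10.0.0.2:4420 while these attempts were made. A minimal, hedged way to confirm that mapping on a similar Linux host (not part of the captured log):
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'   # prints: ECONNREFUSED - Connection refused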
00:29:20.862 [2024-12-10 14:31:21.508253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241b460 (9): Bad file descriptor 00:29:20.862 [2024-12-10 14:31:21.508549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.862 [2024-12-10 14:31:21.508620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.862 qpair failed and we were unable to recover it. 00:29:20.862 [2024-12-10 14:31:21.508862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.862 [2024-12-10 14:31:21.508911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.862 qpair failed and we were unable to recover it.
[... the connect() failed (errno = 111) / sock connection error pair then repeats continuously for tqpair=0x240d500 with addr=10.0.0.2, port=4420, each attempt again ending in "qpair failed and we were unable to recover it." ...]
00:29:20.863 [2024-12-10 14:31:21.520194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.863 [2024-12-10 14:31:21.520226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:20.863 [2024-12-10 14:31:21.520234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:20.863 [2024-12-10 14:31:21.520241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:20.863 [2024-12-10 14:31:21.520247] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[... these NOTICE lines arrive interleaved with the ongoing connect() failed (errno = 111) / sock connection error repeats for tqpair=0x240d500 with addr=10.0.0.2, port=4420 ...]
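A short aside on the hint above, as a hedged sketch rather than part of the captured log: the spdk_trace invocation the NOTICE lines describe would typically be run on the test host while the nvmf app is still up (the instance id 0, the tpoint group nvmf, and the shm file name /dev/shm/nvmf_trace.0 are all taken from the log itself).
  spdk_trace -s nvmf -i 0              # attach to the running nvmf app and snapshot its trace events
  cp /dev/shm/nvmf_trace.0 /tmp/       # or keep the raw shm file for offline analysis/debug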
00:29:20.864 [2024-12-10 14:31:21.521784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:20.864 [2024-12-10 14:31:21.521892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:20.864 [2024-12-10 14:31:21.521997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:20.864 [2024-12-10 14:31:21.521998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
[... interleaved with these NOTICE lines, the connect() failed (errno = 111) / sock connection error repeats continue for tqpair=0x240d500 and then switch to tqpair=0x7f6370000b90, all against addr=10.0.0.2, port=4420 ...]
00:29:20.864 [2024-12-10 14:31:21.522803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.864 [2024-12-10 14:31:21.522838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.864 qpair failed and we were unable to recover it.
[... the same pair keeps repeating for tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420, each attempt ending in "qpair failed and we were unable to recover it." ...]
00:29:20.865 [2024-12-10 14:31:21.529186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.529230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.529410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.529443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.529577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.529610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.529850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.529884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.530057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.530091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.530334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.530370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.530502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.530536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.530728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.530763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.530880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.530915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.531040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.531075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 
00:29:20.865 [2024-12-10 14:31:21.531186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.531232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.531367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.531400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.531513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.531547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.531726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.531761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.531987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.532020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.532232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.532268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.532476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.532511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.532644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.532677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.532821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.532855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.533095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.533129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 
00:29:20.865 [2024-12-10 14:31:21.533369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.533404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.533508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.533542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.533736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.533772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.533947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.533981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.534254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.534291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.534404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.534438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.534557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.534592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.534774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.534807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.534989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.535023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.535191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.535234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 
00:29:20.865 [2024-12-10 14:31:21.535434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.535467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.535658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.535693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.535882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.535916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.536093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.536127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.536305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.536341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.536443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.536485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.536620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.536654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.536908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.536942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.865 qpair failed and we were unable to recover it. 00:29:20.865 [2024-12-10 14:31:21.537155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.865 [2024-12-10 14:31:21.537190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.537333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.537368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 
00:29:20.866 [2024-12-10 14:31:21.537556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.537590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.537880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.537915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.538046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.538079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.538252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.538286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.538402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.538436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.538564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.538598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.538750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.538784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.538964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.538998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.539195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.539241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.539435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.539468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 
00:29:20.866 [2024-12-10 14:31:21.539735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.539769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.539903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.539937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.540067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.540100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.540363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.540399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.540667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.540701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.540944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.540978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.541088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.541121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.541245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.541280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.541396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.541429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.541620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.541654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 
00:29:20.866 [2024-12-10 14:31:21.541827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.541861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.542035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.542068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.542273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.542310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.542428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.542462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.542725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.542758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.542941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.542975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.543152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.543187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.543403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.543466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.543623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.543670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.543781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.543814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 
00:29:20.866 [2024-12-10 14:31:21.543983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.544015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.544205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.544254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.544458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.544491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.544758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.544791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.544915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.544948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.545057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.545107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.545251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.545287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.866 qpair failed and we were unable to recover it. 00:29:20.866 [2024-12-10 14:31:21.545471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.866 [2024-12-10 14:31:21.545504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.867 qpair failed and we were unable to recover it. 00:29:20.867 [2024-12-10 14:31:21.545679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.867 [2024-12-10 14:31:21.545712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.867 qpair failed and we were unable to recover it. 00:29:20.867 [2024-12-10 14:31:21.545907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.867 [2024-12-10 14:31:21.545939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.867 qpair failed and we were unable to recover it. 
00:29:20.867 [2024-12-10 14:31:21.546072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.867 [2024-12-10 14:31:21.546105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.867 qpair failed and we were unable to recover it. 00:29:20.867 [2024-12-10 14:31:21.546310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.867 [2024-12-10 14:31:21.546343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.867 qpair failed and we were unable to recover it. 00:29:20.867 [2024-12-10 14:31:21.546467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.867 [2024-12-10 14:31:21.546500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.867 qpair failed and we were unable to recover it. 00:29:20.867 [2024-12-10 14:31:21.546621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.867 [2024-12-10 14:31:21.546653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.867 qpair failed and we were unable to recover it. 00:29:20.867 [2024-12-10 14:31:21.546780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.867 [2024-12-10 14:31:21.546813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.867 qpair failed and we were unable to recover it. 00:29:20.867 [2024-12-10 14:31:21.547031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.867 [2024-12-10 14:31:21.547063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.867 qpair failed and we were unable to recover it. 00:29:20.867 [2024-12-10 14:31:21.547239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.867 [2024-12-10 14:31:21.547273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.867 qpair failed and we were unable to recover it. 00:29:20.867 [2024-12-10 14:31:21.547393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.867 [2024-12-10 14:31:21.547425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.867 qpair failed and we were unable to recover it. 00:29:20.867 [2024-12-10 14:31:21.547621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.867 [2024-12-10 14:31:21.547654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.867 qpair failed and we were unable to recover it. 00:29:20.867 [2024-12-10 14:31:21.547784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.867 [2024-12-10 14:31:21.547817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.867 qpair failed and we were unable to recover it. 
00:29:20.867 [2024-12-10 14:31:21.547991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.867 [2024-12-10 14:31:21.548023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.867 qpair failed and we were unable to recover it. 00:29:20.867 [2024-12-10 14:31:21.548235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.867 [2024-12-10 14:31:21.548270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.867 qpair failed and we were unable to recover it. 00:29:20.867 [2024-12-10 14:31:21.548406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.867 [2024-12-10 14:31:21.548439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.867 qpair failed and we were unable to recover it. 00:29:20.867 [2024-12-10 14:31:21.548555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.867 [2024-12-10 14:31:21.548586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.867 qpair failed and we were unable to recover it. 00:29:20.867 [2024-12-10 14:31:21.548772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.867 [2024-12-10 14:31:21.548805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:20.867 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.548982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.549015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.549196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.549239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.549351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.549383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.549580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.549612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.549731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.549764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 
00:29:21.140 [2024-12-10 14:31:21.549868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.549902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.550076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.550110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.550370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.550416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.550548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.550581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.550715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.550748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.550990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.551023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.551202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.551247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.551359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.551391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.551511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.551544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.551727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.551760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 
00:29:21.140 [2024-12-10 14:31:21.552010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.552043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.552235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.552271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.552548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.552581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.552706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.552738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.552925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.552957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.553084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.553124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.553387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.553422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.553612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.553644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.553766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.553799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.553993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.554026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 
00:29:21.140 [2024-12-10 14:31:21.554135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.554167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.554365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.554399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.554570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.554603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.554784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.554817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.554947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.554978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.555106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.555140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.555262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.555294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.555538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.555570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.555758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.140 [2024-12-10 14:31:21.555790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.140 qpair failed and we were unable to recover it. 00:29:21.140 [2024-12-10 14:31:21.556009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.556041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 
00:29:21.141 [2024-12-10 14:31:21.556170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.556203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.556401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.556434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.556630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.556664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.556804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.556837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.556958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.556989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.557191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.557236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.557357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.557390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.557638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.557671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.557773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.557805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.557977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.558010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 
00:29:21.141 [2024-12-10 14:31:21.558251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.558285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.558402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.558435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.558687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.558725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.558903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.558934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.559107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.559139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.559312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.559347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.559461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.559494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.559757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.559789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.559974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.560007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.560191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.560232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 
00:29:21.141 [2024-12-10 14:31:21.560436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.560470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.560728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.560761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.560890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.560922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.561104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.561137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.561317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.561351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.561541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.561580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.561757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.561790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.562071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.562103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.562343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.562378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.562500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.562532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 
00:29:21.141 [2024-12-10 14:31:21.562722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.562754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.562868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.562900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.563144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.563176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.563426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.563461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.563625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.563658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.563842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.563875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.564059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.564094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.564296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.564332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.141 qpair failed and we were unable to recover it. 00:29:21.141 [2024-12-10 14:31:21.564484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.141 [2024-12-10 14:31:21.564518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.564711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.564745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 
00:29:21.142 [2024-12-10 14:31:21.564914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.564947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.565074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.565109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.565244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.565279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.565457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.565492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.565677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.565711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.565841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.565876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.566122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.566156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.566351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.566387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.566566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.566599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.566762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.566798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 
00:29:21.142 [2024-12-10 14:31:21.566898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.566934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.567065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.567098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.567317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.567370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.567552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.567589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.567836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.567870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.567980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.568014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.568203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.568252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.568431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.568464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.568578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.568612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.568739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.568773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 
00:29:21.142 [2024-12-10 14:31:21.568961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.568995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.569240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.569277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.569482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.569517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.569757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.569790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.569982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.570016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.570235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.570270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.570406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.570441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.570631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.570664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.570787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.570821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.571032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.571066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 
00:29:21.142 [2024-12-10 14:31:21.571180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.571214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.571397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.571431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.571621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.571654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.571824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.571859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.572155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.572188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.572377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.572413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.572611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.572646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.572765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.142 [2024-12-10 14:31:21.572799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.142 qpair failed and we were unable to recover it. 00:29:21.142 [2024-12-10 14:31:21.572987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.573023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.573209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.573263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 
00:29:21.143 [2024-12-10 14:31:21.573381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.573415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.573649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.573686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.573887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.573924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.574100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.574136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.574320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.574359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.574550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.574584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.574698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.574732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.574859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.574893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.575143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.575178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.575395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.575430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 
00:29:21.143 [2024-12-10 14:31:21.575608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.575642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.575778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.575811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.575997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.576031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.576294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.576329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.576557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.576590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.576798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.576833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.577097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.577131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.577321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.577356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.577491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.577527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.577769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.577803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 
00:29:21.143 [2024-12-10 14:31:21.577939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.577973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.578079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.578113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.578292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.578328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.578505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.578539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.578780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.578814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.579029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.579063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.579240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.579287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.579512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.579545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.579810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.579843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.580062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.580096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 
00:29:21.143 [2024-12-10 14:31:21.580286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.580320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.580442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.580475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.580654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.580686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.580895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.580928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.581173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.581206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.581437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.581470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.581709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.581742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.143 qpair failed and we were unable to recover it. 00:29:21.143 [2024-12-10 14:31:21.581872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.143 [2024-12-10 14:31:21.581905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.582024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.582056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.582161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.582195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 
00:29:21.144 [2024-12-10 14:31:21.582412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.582446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.582657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.582690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.582865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.582898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.583067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.583100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.583273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.583308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.583483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.583517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.583709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.583742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.583918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.583951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.584073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.584106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.584321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.584354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 
00:29:21.144 [2024-12-10 14:31:21.584597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.584631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.584735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.584767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.585027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.585061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.585195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.585250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.585424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.585456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.585559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.585592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.585767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.585799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.585996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.586029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.586141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.586174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.586436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.586471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 
00:29:21.144 [2024-12-10 14:31:21.586748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.586781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.586954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.586987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.587122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.587155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.587340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.587374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.587561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.587594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.587701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.587734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.587984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.588017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.588235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.588292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.588572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.588605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.588798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.588829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 
00:29:21.144 [2024-12-10 14:31:21.589078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.144 [2024-12-10 14:31:21.589110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.144 qpair failed and we were unable to recover it. 00:29:21.144 [2024-12-10 14:31:21.589295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.589330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.589506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.589539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.589714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.589747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.590038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.590071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.590201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.590244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.590513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.590546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.590732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.590764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.591027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.591060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.591293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.591327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 
00:29:21.145 [2024-12-10 14:31:21.591500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.591540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.591729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.591762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.591954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.591987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.592159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.592192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.592442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.592475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.592599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.592632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.592921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.592954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.593229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.593264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.593448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.593481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.593682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.593714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 
00:29:21.145 [2024-12-10 14:31:21.593835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.593868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.593988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.594021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.594260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.594296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.594428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.594461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.594587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.594621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.594751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.594784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.594958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.594991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.595167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.595199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.595412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.595446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.595643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.595675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 
00:29:21.145 [2024-12-10 14:31:21.595859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.595892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.596029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.596063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.596309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.596343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.596533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.596566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.596692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.596725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.596842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.596874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.597047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.597080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.597398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.597449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.597730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.597764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 00:29:21.145 [2024-12-10 14:31:21.597982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.145 [2024-12-10 14:31:21.598014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.145 qpair failed and we were unable to recover it. 
00:29:21.145 [2024-12-10 14:31:21.598146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.145 [2024-12-10 14:31:21.598179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.145 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.598365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.598397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.598576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.598608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.598794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.598826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.598930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.598962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.599094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.599127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.599362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.599397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.599530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.599562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.599690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.599723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.599898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.599931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.600056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.600103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.600366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.600400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.600586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.600619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.600796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.600829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.601005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.601038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.601248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.601283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.601418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.601451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.601625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.601657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.601926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.601958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.602166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.602199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.602403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.602439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.602699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.602731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.602847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.602880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.603072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.603105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.603295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.603330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.603579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.603612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.603825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.603858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.604035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.604067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.604261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.604295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.604534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.604568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.604760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.604792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.604933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.604966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.605087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.605120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.605316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.605350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.605532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.605565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.605770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.605803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.605973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.606006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.606150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.606196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.606393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.606427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.146 [2024-12-10 14:31:21.606603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.146 [2024-12-10 14:31:21.606635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:21.146 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.606850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.606881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.606988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.607021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.607205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.607248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.607440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.607473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.607577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.607610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.607831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.607863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.608070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.608109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.608320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.608383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.608578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.608611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.608891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.608924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.609049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.609088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.609305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.609339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.609623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.609656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.609901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.609935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.610173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.610206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.610417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.610451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.610645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.610679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.610868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.610900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.611087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.611120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.611304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.611339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.611548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.611581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.611697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.611730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.611912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.611946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.612133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.612165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.612358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.612393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.612683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.612716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.612821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.612854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.613027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.613060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.613327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.613362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.613534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.613567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.613704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.613737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.613921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.613954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.614074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.614107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.614309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.614343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.614582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.614615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.614719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.614752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.614964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.614997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.615199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.615253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.615402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.147 [2024-12-10 14:31:21.615436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.147 qpair failed and we were unable to recover it.
00:29:21.147 [2024-12-10 14:31:21.615709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.615743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.615858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.615891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.616080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.616113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.616310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.616346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.616544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.616578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.616703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.616736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.616912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.616945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.617115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.617149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.617415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.617450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.617640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.617674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.617852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.617885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.618023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.618057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.618262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.618298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.618473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.618506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.618707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.618741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.618951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.618984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.619167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.619200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.619394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.619428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.619550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.619584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.619701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.619733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.619868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.619901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.620093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.620127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.620302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.620337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.620513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.620546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.620674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.620708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.620837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.620876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.621065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.621098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.621329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.621364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.621607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.621639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.621812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.621845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.622085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.622118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.622257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.622291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.622416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.622449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.622653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.622686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.622819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.622852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.148 [2024-12-10 14:31:21.623094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.148 [2024-12-10 14:31:21.623127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.148 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.623326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.623360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.623475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.623508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.623702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.623734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.623931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.623964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.624148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.624181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.624390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.624427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.624611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.624643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.624874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.624907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.625026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.625059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.625240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.625275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.625460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.625493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.625595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.625628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:21.149 [2024-12-10 14:31:21.625816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.625852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.625979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.626013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:29:21.149 [2024-12-10 14:31:21.626262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.626299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.626498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.626531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.626717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.626750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:21.149 [2024-12-10 14:31:21.626941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.626976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.627094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.627128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:21.149 [2024-12-10 14:31:21.627250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.627286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.627525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.627559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.627691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.627724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.627859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.627892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.628024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.628057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.628315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.628349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.628460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.628493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.628762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.628796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.628900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.628934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.629062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.629096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.629371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.629406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.629652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.629686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.629935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.629969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.630158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.630190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.630383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.630418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.630581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.630615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.630829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.630862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.149 [2024-12-10 14:31:21.631036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.149 [2024-12-10 14:31:21.631070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.149 qpair failed and we were unable to recover it.
00:29:21.150 [2024-12-10 14:31:21.631329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.150 [2024-12-10 14:31:21.631364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.150 qpair failed and we were unable to recover it.
00:29:21.150 [2024-12-10 14:31:21.631622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.150 [2024-12-10 14:31:21.631655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.150 qpair failed and we were unable to recover it.
00:29:21.150 [2024-12-10 14:31:21.631844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.150 [2024-12-10 14:31:21.631877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.150 qpair failed and we were unable to recover it.
00:29:21.150 [2024-12-10 14:31:21.631997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.150 [2024-12-10 14:31:21.632031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.150 qpair failed and we were unable to recover it.
00:29:21.150 [2024-12-10 14:31:21.632234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.150 [2024-12-10 14:31:21.632275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.150 qpair failed and we were unable to recover it.
00:29:21.150 [2024-12-10 14:31:21.632386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.150 [2024-12-10 14:31:21.632419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.150 qpair failed and we were unable to recover it.
00:29:21.150 [2024-12-10 14:31:21.632546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.150 [2024-12-10 14:31:21.632580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.150 qpair failed and we were unable to recover it.
00:29:21.150 [2024-12-10 14:31:21.632758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.150 [2024-12-10 14:31:21.632791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.150 qpair failed and we were unable to recover it.
00:29:21.150 [2024-12-10 14:31:21.632967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.150 [2024-12-10 14:31:21.633000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.150 qpair failed and we were unable to recover it.
00:29:21.150 [2024-12-10 14:31:21.633129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.150 [2024-12-10 14:31:21.633162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.150 qpair failed and we were unable to recover it.
00:29:21.150 [2024-12-10 14:31:21.633417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.150 [2024-12-10 14:31:21.633453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.150 qpair failed and we were unable to recover it.
00:29:21.150 [2024-12-10 14:31:21.633648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.150 [2024-12-10 14:31:21.633681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.150 qpair failed and we were unable to recover it.
00:29:21.150 [2024-12-10 14:31:21.633865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.150 [2024-12-10 14:31:21.633898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.150 qpair failed and we were unable to recover it.
00:29:21.150 [2024-12-10 14:31:21.634074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.150 [2024-12-10 14:31:21.634107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.150 qpair failed and we were unable to recover it.
00:29:21.150 [2024-12-10 14:31:21.634212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.150 [2024-12-10 14:31:21.634268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.150 qpair failed and we were unable to recover it.
00:29:21.150 [2024-12-10 14:31:21.634443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.634477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 00:29:21.150 [2024-12-10 14:31:21.634688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.634722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 00:29:21.150 [2024-12-10 14:31:21.634958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.634991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 00:29:21.150 [2024-12-10 14:31:21.635134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.635168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 00:29:21.150 [2024-12-10 14:31:21.635377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.635411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 00:29:21.150 [2024-12-10 14:31:21.635524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.635558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 00:29:21.150 [2024-12-10 14:31:21.635683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.635716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 00:29:21.150 [2024-12-10 14:31:21.635908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.635941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 00:29:21.150 [2024-12-10 14:31:21.636057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.636095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 00:29:21.150 [2024-12-10 14:31:21.636237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.636272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 
00:29:21.150 [2024-12-10 14:31:21.636457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.636490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 00:29:21.150 [2024-12-10 14:31:21.636608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.636641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 00:29:21.150 [2024-12-10 14:31:21.636816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.636850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 00:29:21.150 [2024-12-10 14:31:21.637038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.637071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 00:29:21.150 [2024-12-10 14:31:21.637196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.637241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 00:29:21.150 [2024-12-10 14:31:21.637422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.637457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 00:29:21.150 [2024-12-10 14:31:21.637570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.637609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 00:29:21.150 [2024-12-10 14:31:21.637784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.637818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 00:29:21.150 [2024-12-10 14:31:21.637942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.637976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 00:29:21.150 [2024-12-10 14:31:21.638105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.638140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 
00:29:21.150 [2024-12-10 14:31:21.638330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.638367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 00:29:21.150 [2024-12-10 14:31:21.638481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.638516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 00:29:21.150 [2024-12-10 14:31:21.638644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.638676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 00:29:21.150 [2024-12-10 14:31:21.638892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.638926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.150 qpair failed and we were unable to recover it. 00:29:21.150 [2024-12-10 14:31:21.639099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.150 [2024-12-10 14:31:21.639133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.151 qpair failed and we were unable to recover it. 00:29:21.151 [2024-12-10 14:31:21.639256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.151 [2024-12-10 14:31:21.639291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.151 qpair failed and we were unable to recover it. 00:29:21.151 [2024-12-10 14:31:21.639418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.151 [2024-12-10 14:31:21.639450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.151 qpair failed and we were unable to recover it. 00:29:21.151 [2024-12-10 14:31:21.639552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.151 [2024-12-10 14:31:21.639585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.151 qpair failed and we were unable to recover it. 00:29:21.151 [2024-12-10 14:31:21.639825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.151 [2024-12-10 14:31:21.639858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.151 qpair failed and we were unable to recover it. 00:29:21.151 [2024-12-10 14:31:21.640002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.151 [2024-12-10 14:31:21.640035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.151 qpair failed and we were unable to recover it. 
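(Note: errno 111 is ECONNREFUSED on Linux, so each of these records is the initiator's connect() being refused because nothing is listening on 10.0.0.2:4420 at this point, which is the expected state in a target-disconnect test. The host retries in a tight loop, so the same three-line record repeats almost unchanged at microsecond intervals. A minimal standalone probe of the same condition, assuming a bash shell with /dev/tcp support on the test host and not part of the test suite itself:

    # Hypothetical probe: with no listener on 10.0.0.2:4420 the kernel
    # answers RST and connect() fails with ECONNREFUSED (errno 111).
    if bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "10.0.0.2:4420 accepted the connection"
    else
        echo "connect() to 10.0.0.2:4420 failed (ECONNREFUSED expected here)"
    fi
)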
00:29:21.151 [2024-12-10 14:31:21.640289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.151 [2024-12-10 14:31:21.640333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.151 qpair failed and we were unable to recover it.
00:29:21.152 [2024-12-10 14:31:21.647949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.152 [2024-12-10 14:31:21.647982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.152 qpair failed and we were unable to recover it.
00:29:21.152 [2024-12-10 14:31:21.648171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.152 [2024-12-10 14:31:21.648210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:21.152 qpair failed and we were unable to recover it.
00:29:21.152 [2024-12-10 14:31:21.648936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.152 [2024-12-10 14:31:21.648969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:21.152 qpair failed and we were unable to recover it.
00:29:21.152 [2024-12-10 14:31:21.649103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.152 [2024-12-10 14:31:21.649141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.152 qpair failed and we were unable to recover it.
00:29:21.152 [2024-12-10 14:31:21.649315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.152 [2024-12-10 14:31:21.649353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.152 qpair failed and we were unable to recover it.
00:29:21.152 [2024-12-10 14:31:21.651148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.152 [2024-12-10 14:31:21.651183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.152 qpair failed and we were unable to recover it.
00:29:21.152 [2024-12-10 14:31:21.651383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.152 [2024-12-10 14:31:21.651420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.152 qpair failed and we were unable to recover it.
00:29:21.153 [2024-12-10 14:31:21.657864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.153 [2024-12-10 14:31:21.657898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420
00:29:21.153 qpair failed and we were unable to recover it.
00:29:21.153 [2024-12-10 14:31:21.658109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.153 [2024-12-10 14:31:21.658145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.153 qpair failed and we were unable to recover it.
00:29:21.153 [2024-12-10 14:31:21.660770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.154 [2024-12-10 14:31:21.660803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.154 qpair failed and we were unable to recover it.
00:29:21.154 [2024-12-10 14:31:21.660921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.154 [2024-12-10 14:31:21.660954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.154 qpair failed and we were unable to recover it.
00:29:21.154 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:21.154 [2024-12-10 14:31:21.661461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.154 [2024-12-10 14:31:21.661495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.154 qpair failed and we were unable to recover it.
00:29:21.154 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:21.154 [2024-12-10 14:31:21.661805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.154 [2024-12-10 14:31:21.661840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.154 qpair failed and we were unable to recover it.
00:29:21.154 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:21.154 [2024-12-10 14:31:21.662315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.154 [2024-12-10 14:31:21.662351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.154 qpair failed and we were unable to recover it.
00:29:21.154 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:21.154 [2024-12-10 14:31:21.662531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.154 [2024-12-10 14:31:21.662567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.154 qpair failed and we were unable to recover it.
00:29:21.154 [2024-12-10 14:31:21.663778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.154 [2024-12-10 14:31:21.663810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.154 qpair failed and we were unable to recover it.
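(Note: the `host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0` trace above is the test script moving on while the reconnect errors keep arriving on stderr: it creates the RAM-backed bdev that presumably serves as the NVMe-oF namespace for the rest of the test. rpc_cmd is the autotest harness's wrapper around SPDK's RPC client; run by hand against a target on the default RPC socket, the equivalent call would be:

    # Create a 64 MB malloc bdev with a 512-byte block size, named Malloc0
    # (default RPC socket path assumed)
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0

The `trap ... SIGINT SIGTERM EXIT` line just before it installs the cleanup handler, so the shared-memory dump and nvmftestfini teardown run even if the test is interrupted.)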
00:29:21.154 [2024-12-10 14:31:21.663917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.154 [2024-12-10 14:31:21.663951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.154 qpair failed and we were unable to recover it.
00:29:21.154 [2024-12-10 14:31:21.664846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.154 [2024-12-10 14:31:21.664878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420
00:29:21.154 qpair failed and we were unable to recover it.
00:29:21.154 [2024-12-10 14:31:21.665007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.154 [2024-12-10 14:31:21.665042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:21.154 qpair failed and we were unable to recover it.
00:29:21.155 [2024-12-10 14:31:21.669652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.155 [2024-12-10 14:31:21.669684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420
00:29:21.155 qpair failed and we were unable to recover it.
00:29:21.155 [2024-12-10 14:31:21.669789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.155 [2024-12-10 14:31:21.669822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.155 qpair failed and we were unable to recover it. 00:29:21.155 [2024-12-10 14:31:21.669981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.155 [2024-12-10 14:31:21.670013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.155 qpair failed and we were unable to recover it. 00:29:21.155 [2024-12-10 14:31:21.670237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.155 [2024-12-10 14:31:21.670271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.155 qpair failed and we were unable to recover it. 00:29:21.155 [2024-12-10 14:31:21.670399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.155 [2024-12-10 14:31:21.670432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.155 qpair failed and we were unable to recover it. 00:29:21.155 [2024-12-10 14:31:21.670537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.155 [2024-12-10 14:31:21.670569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.155 qpair failed and we were unable to recover it. 00:29:21.155 [2024-12-10 14:31:21.670759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.155 [2024-12-10 14:31:21.670792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.155 qpair failed and we were unable to recover it. 00:29:21.155 [2024-12-10 14:31:21.670919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.155 [2024-12-10 14:31:21.670951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.155 qpair failed and we were unable to recover it. 00:29:21.155 [2024-12-10 14:31:21.671070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.155 [2024-12-10 14:31:21.671102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.155 qpair failed and we were unable to recover it. 00:29:21.155 [2024-12-10 14:31:21.671274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.155 [2024-12-10 14:31:21.671306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.155 qpair failed and we were unable to recover it. 00:29:21.155 [2024-12-10 14:31:21.671417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.155 [2024-12-10 14:31:21.671450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.155 qpair failed and we were unable to recover it. 
00:29:21.155 [2024-12-10 14:31:21.671553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.155 [2024-12-10 14:31:21.671585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.155 qpair failed and we were unable to recover it. 00:29:21.155 [2024-12-10 14:31:21.671761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.155 [2024-12-10 14:31:21.671794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.155 qpair failed and we were unable to recover it. 00:29:21.155 [2024-12-10 14:31:21.671912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.155 [2024-12-10 14:31:21.671944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.155 qpair failed and we were unable to recover it. 00:29:21.155 [2024-12-10 14:31:21.672120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.155 [2024-12-10 14:31:21.672153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.155 qpair failed and we were unable to recover it. 00:29:21.155 [2024-12-10 14:31:21.672288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.155 [2024-12-10 14:31:21.672325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6370000b90 with addr=10.0.0.2, port=4420 00:29:21.155 qpair failed and we were unable to recover it. 00:29:21.155 [2024-12-10 14:31:21.672444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.155 [2024-12-10 14:31:21.672483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.155 qpair failed and we were unable to recover it. 00:29:21.155 [2024-12-10 14:31:21.672665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.155 [2024-12-10 14:31:21.672698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.155 qpair failed and we were unable to recover it. 00:29:21.155 [2024-12-10 14:31:21.672817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.155 [2024-12-10 14:31:21.672849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.155 qpair failed and we were unable to recover it. 00:29:21.155 [2024-12-10 14:31:21.673026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.155 [2024-12-10 14:31:21.673059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.155 qpair failed and we were unable to recover it. 00:29:21.155 [2024-12-10 14:31:21.673236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.155 [2024-12-10 14:31:21.673270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.155 qpair failed and we were unable to recover it. 
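For readers decoding these records: errno 111 on Linux is ECONNREFUSED, meaning the TCP SYN to 10.0.0.2:4420 was answered with a reset because nothing is listening there yet, which is exactly the condition this disconnect test provokes. A minimal standalone sketch (plain POSIX sockets, not SPDK code; the attempt count and back-off are illustrative, the address and port mirror the log) that produces the same errno while the listener is down:

    /* repro_econnrefused.c - hedged sketch, not SPDK code: a bare TCP
     * connect to the log's target address fails with errno 111
     * (ECONNREFUSED) as long as no listener is bound to 10.0.0.2:4420. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port   = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        for (int attempt = 0; attempt < 5; attempt++) {   /* count is arbitrary */
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0)
                return 1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);                                /* listener came up */
                return 0;
            }
            /* With no listener, the peer kernel answers the SYN with RST and
             * connect() fails immediately; errno is 111 on Linux. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
            close(fd);
            usleep(100 * 1000);                           /* 100 ms back-off */
        }
        return 1;
    }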
00:29:21.155 [the same triplet continues for tqpair=0x240d500, one failed probe every few hundred microseconds, from 14:31:21.673 through 14:31:21.690; identical records elided]
00:29:21.158 Malloc0
00:29:21.158 [four more identical failures on tqpair=0x240d500 through 14:31:21.690 elided]
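Two notes on what is interleaved here. The stray Malloc0 is most likely the harness echoing the name of a malloc bdev it is creating on the target side while the host keeps probing; target bring-up and host reconnect attempts run concurrently, so their output interleaves. And the closing line of every triplet, "qpair failed and we were unable to recover it.", follows from the errno: a refused connect leaves nothing to recover on that queue pair, so it is torn down and the next probe uses a fresh one. A generic classification sketch (my illustration, not SPDK's actual recovery logic):

    /* Generic sketch, not SPDK code: connect-time errno values split into
     * "still in progress, poll again" versus "fatal for this qpair". errno
     * 111 (ECONNREFUSED) is in the fatal bucket, which is why each failed
     * probe above ends with the qpair abandoned rather than recovered. */
    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    static bool qpair_connect_is_fatal(int err)
    {
        switch (err) {
        case EINPROGRESS:   /* nonblocking connect still in flight */
        case EINTR:         /* interrupted; retry the call */
        case EAGAIN:        /* transient resource shortage */
            return false;   /* keep polling this qpair */
        default:
            return true;    /* ECONNREFUSED, EHOSTUNREACH, ...: give up */
        }
    }

    int main(void)
    {
        printf("errno 111 fatal? %s\n",
               qpair_connect_is_fatal(ECONNREFUSED) ? "yes" : "no");
        return 0;
    }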
00:29:21.158 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.158 [2024-12-10 14:31:21.691024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.691062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.691242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.691276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.691465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.691499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.158 [2024-12-10 14:31:21.691690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.691724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.691838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.691870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:21.158 [2024-12-10 14:31:21.692073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.692106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.692241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.692273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.692396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.692428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 
00:29:21.158 [2024-12-10 14:31:21.692684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.692717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.692837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.692868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.693071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.693104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.693296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.693331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.693578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.693612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.693849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.693881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.694080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.694112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.694293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.694327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.694446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.694477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.694591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.694623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 
00:29:21.158 [2024-12-10 14:31:21.694806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.694839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.694977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.695008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.695135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.695168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.695445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.695479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.695742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.695774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.696018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.696050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.696185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.696226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.696361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.696395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.696512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.696544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 00:29:21.158 [2024-12-10 14:31:21.696668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.158 [2024-12-10 14:31:21.696701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.158 qpair failed and we were unable to recover it. 
00:29:21.158 [2024-12-10 14:31:21.696873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.159 [2024-12-10 14:31:21.696907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.159 qpair failed and we were unable to recover it. 00:29:21.159 [2024-12-10 14:31:21.697044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.159 [2024-12-10 14:31:21.697076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.159 qpair failed and we were unable to recover it. 00:29:21.159 [2024-12-10 14:31:21.697196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.159 [2024-12-10 14:31:21.697240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.159 qpair failed and we were unable to recover it. 00:29:21.159 [2024-12-10 14:31:21.697413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.159 [2024-12-10 14:31:21.697445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.159 qpair failed and we were unable to recover it. 00:29:21.159 [2024-12-10 14:31:21.697635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.159 [2024-12-10 14:31:21.697666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.159 qpair failed and we were unable to recover it. 00:29:21.159 [2024-12-10 14:31:21.697762] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:21.159 [2024-12-10 14:31:21.697841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.159 [2024-12-10 14:31:21.697872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.159 qpair failed and we were unable to recover it. 00:29:21.159 [2024-12-10 14:31:21.698007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.159 [2024-12-10 14:31:21.698039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.159 qpair failed and we were unable to recover it. 00:29:21.159 [2024-12-10 14:31:21.698226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.159 [2024-12-10 14:31:21.698261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.159 qpair failed and we were unable to recover it. 00:29:21.159 [2024-12-10 14:31:21.698522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.159 [2024-12-10 14:31:21.698554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6364000b90 with addr=10.0.0.2, port=4420 00:29:21.159 qpair failed and we were unable to recover it. 
00:29:21.159 (the same three-message failure sequence repeats for 30 further connect attempts against tqpair=0x7f6364000b90, 14:31:21.698782 through 14:31:21.704829)
00:29:21.160 (seven further connect attempts against tqpair=0x7f6364000b90, 14:31:21.705003 through 14:31:21.706318) 00:29:21.160 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.160 (two further connect attempts at 14:31:21.706501 and 14:31:21.706661)
00:29:21.160 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:21.160 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.160 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:21.160 (connect retries against tqpair=0x7f6364000b90 continue throughout, 14:31:21.706896 through 14:31:21.708448)
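rpc_cmd is the test harness's wrapper around SPDK's scripts/rpc.py, which speaks JSON-RPC 2.0 to the target over a Unix socket (by default /var/tmp/spdk.sock). A sketch of the request behind the nvmf_create_subsystem call above; the flag-to-parameter mapping (-a to allow_any_host, -s to serial_number) follows SPDK's RPC documentation but is an assumption here, not something this log shows:

    import json
    import socket

    # Build the JSON-RPC 2.0 request that rpc.py would send for:
    #   nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    req = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_create_subsystem",
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "allow_any_host": True,                 # -a
            "serial_number": "SPDK00000000000001",  # -s
        },
    }

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/var/tmp/spdk.sock")  # default SPDK RPC socket path
    sock.sendall(json.dumps(req).encode())
    # A single recv suffices for this small reply; a robust client would
    # keep reading until the JSON document is complete.
    print(sock.recv(65536).decode())
    sock.close()

The nvmf_subsystem_add_ns and nvmf_subsystem_add_listener calls that follow in this log use the same transport, only with different method names and params.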
00:29:21.160 (connect retries against tqpair=0x7f6364000b90 continue, 14:31:21.708633 through 14:31:21.712185) 00:29:21.160 [2024-12-10 14:31:21.712421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.160 [2024-12-10 14:31:21.712467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240d500 with addr=10.0.0.2, port=4420 00:29:21.160 qpair failed and we were unable to recover it. 00:29:21.160 (same failure for tqpair=0x240d500 at 14:31:21.712656)
00:29:21.161 (eight further connect attempts against tqpair=0x240d500, 14:31:21.712875 through 14:31:21.714316) 00:29:21.161 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.161 (two further connect attempts at 14:31:21.714504 and 14:31:21.714642)
00:29:21.161 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:21.161 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.161 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:21.161 (connect retries against tqpair=0x240d500 continue throughout, 14:31:21.714848 through 14:31:21.716688)
00:29:21.161 (five further connect attempts against tqpair=0x240d500, 14:31:21.716861 through 14:31:21.717770) 00:29:21.161 [2024-12-10 14:31:21.717905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.161 [2024-12-10 14:31:21.717948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6368000b90 with addr=10.0.0.2, port=4420 00:29:21.161 qpair failed and we were unable to recover it. 00:29:21.161 (retries then resume against tqpair=0x7f6364000b90, 14:31:21.718088 through 14:31:21.718745)
00:29:21.162 (connect retries against tqpair=0x7f6364000b90 continue, 14:31:21.718877 through 14:31:21.722351) 00:29:21.162 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.162 (further connect attempt at 14:31:21.722468) 00:29:21.162 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:21.162 (further connect attempts at 14:31:21.722764 and 14:31:21.722928)
00:29:21.162 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.162 (further connect attempt against tqpair=0x7f6364000b90 at 14:31:21.723154) 00:29:21.162 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:21.162 (connect retries switch to tqpair=0x7f6368000b90, 14:31:21.723382 through 14:31:21.725118)
00:29:21.162 (connect retries against tqpair=0x7f6368000b90 continue, 14:31:21.725237 through 14:31:21.725815) 00:29:21.162 [2024-12-10 14:31:21.726015] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:21.162 [2024-12-10 14:31:21.728451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.162 [2024-12-10 14:31:21.728571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.162 [2024-12-10 14:31:21.728625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.162 [2024-12-10 14:31:21.728650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.162 [2024-12-10 14:31:21.728672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.162 [2024-12-10 14:31:21.728724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.162 qpair failed and we were unable to recover it.
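Once the listener is up the TCP connect succeeds, but the Fabrics CONNECT for the I/O qpair is rejected: the target no longer recognizes controller ID 0x1 (the disconnect under test has destroyed it), so it completes CONNECT with sct 1 (command-specific status type) and sc 130. The initiator then reports CQ transport error -6, which is -ENXIO, the "No such device or address" shown in the log. A small decoder sketch; the status-code names are quoted from memory of the NVMe-oF Fabrics values mirrored in SPDK's nvmf_spec.h, so treat the table as an assumption rather than a citation:

    # Decode "sct 1, sc 130" from the log. The command-specific Fabrics
    # status values below are assumed from NVMe-oF / SPDK's nvmf_spec.h.
    SCT = {0: "GENERIC", 1: "COMMAND_SPECIFIC", 2: "MEDIA_AND_DATA", 7: "VENDOR_SPECIFIC"}
    FABRICS_SC = {
        0x80: "INCOMPATIBLE_FORMAT",
        0x81: "CONTROLLER_BUSY",
        0x82: "CONNECT_INVALID_PARAM",
        0x83: "RESTART_DISCOVERY",
        0x84: "CONNECT_INVALID_HOST",
    }

    sct, sc = 1, 130
    print(SCT[sct], hex(sc), FABRICS_SC.get(sc, "UNKNOWN"))
    # -> COMMAND_SPECIFIC 0x82 CONNECT_INVALID_PARAM, consistent with the
    #    target-side "Unknown controller ID 0x1" for this I/O qpair's CONNECT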
00:29:21.162 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.162 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:21.162 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.162 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:21.162 (the Unknown controller ID 0x1 / Fabric CONNECT failure sequence on tqpair=0x7f6368000b90 repeats at 14:31:21.738357) 00:29:21.162 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.163 14:31:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1815804 00:29:21.163 (and again at 14:31:21.748341)
00:29:21.163 (the same failure sequence -- Unknown controller ID 0x1, Connect command failed rc -5, Connect command completed with error: sct 1, sc 130, Failed to connect tqpair=0x7f6368000b90, CQ transport error -6 (No such device or address) on qpair id 3, qpair failed and we were unable to recover it. -- repeats 15 more times at roughly 10 ms intervals, 14:31:21.758412 through 14:31:21.898772)
00:29:21.460 [2024-12-10 14:31:21.908726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.460 [2024-12-10 14:31:21.908798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.460 [2024-12-10 14:31:21.908812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.460 [2024-12-10 14:31:21.908819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.460 [2024-12-10 14:31:21.908825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.460 [2024-12-10 14:31:21.908840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.460 qpair failed and we were unable to recover it. 00:29:21.460 [2024-12-10 14:31:21.918754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.460 [2024-12-10 14:31:21.918857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.460 [2024-12-10 14:31:21.918871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.460 [2024-12-10 14:31:21.918878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.460 [2024-12-10 14:31:21.918884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.460 [2024-12-10 14:31:21.918899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.461 qpair failed and we were unable to recover it. 00:29:21.461 [2024-12-10 14:31:21.928755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.461 [2024-12-10 14:31:21.928825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.461 [2024-12-10 14:31:21.928839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.461 [2024-12-10 14:31:21.928846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.461 [2024-12-10 14:31:21.928852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.461 [2024-12-10 14:31:21.928866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.461 qpair failed and we were unable to recover it. 
00:29:21.461 [2024-12-10 14:31:21.938833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.461 [2024-12-10 14:31:21.938889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.461 [2024-12-10 14:31:21.938903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.461 [2024-12-10 14:31:21.938910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.461 [2024-12-10 14:31:21.938917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.461 [2024-12-10 14:31:21.938931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.461 qpair failed and we were unable to recover it. 00:29:21.461 [2024-12-10 14:31:21.948804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.461 [2024-12-10 14:31:21.948855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.461 [2024-12-10 14:31:21.948868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.461 [2024-12-10 14:31:21.948875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.461 [2024-12-10 14:31:21.948881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.461 [2024-12-10 14:31:21.948895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.461 qpair failed and we were unable to recover it. 00:29:21.461 [2024-12-10 14:31:21.958893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.461 [2024-12-10 14:31:21.958951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.461 [2024-12-10 14:31:21.958963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.461 [2024-12-10 14:31:21.958970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.461 [2024-12-10 14:31:21.958976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.461 [2024-12-10 14:31:21.958991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.461 qpair failed and we were unable to recover it. 
00:29:21.461 [2024-12-10 14:31:21.968866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.461 [2024-12-10 14:31:21.968922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.461 [2024-12-10 14:31:21.968935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.461 [2024-12-10 14:31:21.968942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.461 [2024-12-10 14:31:21.968948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.461 [2024-12-10 14:31:21.968962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.461 qpair failed and we were unable to recover it. 00:29:21.461 [2024-12-10 14:31:21.978865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.461 [2024-12-10 14:31:21.978920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.461 [2024-12-10 14:31:21.978936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.461 [2024-12-10 14:31:21.978943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.461 [2024-12-10 14:31:21.978949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.461 [2024-12-10 14:31:21.978964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.461 qpair failed and we were unable to recover it. 00:29:21.461 [2024-12-10 14:31:21.988916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.461 [2024-12-10 14:31:21.988969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.461 [2024-12-10 14:31:21.988981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.461 [2024-12-10 14:31:21.988988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.461 [2024-12-10 14:31:21.988995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.461 [2024-12-10 14:31:21.989010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.461 qpair failed and we were unable to recover it. 
00:29:21.461 [2024-12-10 14:31:21.998931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.461 [2024-12-10 14:31:21.998993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.461 [2024-12-10 14:31:21.999006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.461 [2024-12-10 14:31:21.999013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.461 [2024-12-10 14:31:21.999020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.461 [2024-12-10 14:31:21.999034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.461 qpair failed and we were unable to recover it. 00:29:21.461 [2024-12-10 14:31:22.008987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.461 [2024-12-10 14:31:22.009039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.461 [2024-12-10 14:31:22.009052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.461 [2024-12-10 14:31:22.009060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.461 [2024-12-10 14:31:22.009066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.461 [2024-12-10 14:31:22.009081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.461 qpair failed and we were unable to recover it. 00:29:21.461 [2024-12-10 14:31:22.018998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.461 [2024-12-10 14:31:22.019056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.461 [2024-12-10 14:31:22.019069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.461 [2024-12-10 14:31:22.019079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.461 [2024-12-10 14:31:22.019086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.461 [2024-12-10 14:31:22.019100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.461 qpair failed and we were unable to recover it. 
00:29:21.461 [2024-12-10 14:31:22.029025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.461 [2024-12-10 14:31:22.029081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.461 [2024-12-10 14:31:22.029094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.461 [2024-12-10 14:31:22.029101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.461 [2024-12-10 14:31:22.029107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.461 [2024-12-10 14:31:22.029121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.461 qpair failed and we were unable to recover it. 00:29:21.461 [2024-12-10 14:31:22.039108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.461 [2024-12-10 14:31:22.039163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.461 [2024-12-10 14:31:22.039176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.461 [2024-12-10 14:31:22.039183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.461 [2024-12-10 14:31:22.039189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.461 [2024-12-10 14:31:22.039203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.461 qpair failed and we were unable to recover it. 00:29:21.461 [2024-12-10 14:31:22.049121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.461 [2024-12-10 14:31:22.049175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.461 [2024-12-10 14:31:22.049188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.461 [2024-12-10 14:31:22.049195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.461 [2024-12-10 14:31:22.049202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.461 [2024-12-10 14:31:22.049221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.461 qpair failed and we were unable to recover it. 
00:29:21.461 [2024-12-10 14:31:22.059114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.461 [2024-12-10 14:31:22.059166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.462 [2024-12-10 14:31:22.059179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.462 [2024-12-10 14:31:22.059186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.462 [2024-12-10 14:31:22.059192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.462 [2024-12-10 14:31:22.059207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.462 qpair failed and we were unable to recover it. 00:29:21.462 [2024-12-10 14:31:22.069167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.462 [2024-12-10 14:31:22.069253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.462 [2024-12-10 14:31:22.069267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.462 [2024-12-10 14:31:22.069274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.462 [2024-12-10 14:31:22.069280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.462 [2024-12-10 14:31:22.069295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.462 qpair failed and we were unable to recover it. 00:29:21.462 [2024-12-10 14:31:22.079180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.462 [2024-12-10 14:31:22.079254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.462 [2024-12-10 14:31:22.079268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.462 [2024-12-10 14:31:22.079275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.462 [2024-12-10 14:31:22.079281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.462 [2024-12-10 14:31:22.079296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.462 qpair failed and we were unable to recover it. 
00:29:21.462 [2024-12-10 14:31:22.089122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.462 [2024-12-10 14:31:22.089209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.462 [2024-12-10 14:31:22.089226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.462 [2024-12-10 14:31:22.089233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.462 [2024-12-10 14:31:22.089239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.462 [2024-12-10 14:31:22.089255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.462 qpair failed and we were unable to recover it. 00:29:21.462 [2024-12-10 14:31:22.099232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.462 [2024-12-10 14:31:22.099291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.462 [2024-12-10 14:31:22.099304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.462 [2024-12-10 14:31:22.099311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.462 [2024-12-10 14:31:22.099317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.462 [2024-12-10 14:31:22.099333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.462 qpair failed and we were unable to recover it. 00:29:21.462 [2024-12-10 14:31:22.109252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.462 [2024-12-10 14:31:22.109337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.462 [2024-12-10 14:31:22.109350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.462 [2024-12-10 14:31:22.109358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.462 [2024-12-10 14:31:22.109364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.462 [2024-12-10 14:31:22.109379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.462 qpair failed and we were unable to recover it. 
00:29:21.462 [2024-12-10 14:31:22.119339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.462 [2024-12-10 14:31:22.119430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.462 [2024-12-10 14:31:22.119443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.462 [2024-12-10 14:31:22.119450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.462 [2024-12-10 14:31:22.119457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.462 [2024-12-10 14:31:22.119472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.462 qpair failed and we were unable to recover it. 00:29:21.462 [2024-12-10 14:31:22.129325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.462 [2024-12-10 14:31:22.129381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.462 [2024-12-10 14:31:22.129394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.462 [2024-12-10 14:31:22.129401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.462 [2024-12-10 14:31:22.129407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.462 [2024-12-10 14:31:22.129422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.462 qpair failed and we were unable to recover it. 00:29:21.462 [2024-12-10 14:31:22.139344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.462 [2024-12-10 14:31:22.139397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.462 [2024-12-10 14:31:22.139411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.462 [2024-12-10 14:31:22.139418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.462 [2024-12-10 14:31:22.139424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.462 [2024-12-10 14:31:22.139439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.462 qpair failed and we were unable to recover it. 
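Every attempt in this storm fails identically (rc -5 at nvme_fabric.c:599, then a -6 / ENXIO CQ transport error on qpair id 3 of tqpair 0x7f6368000b90), so the excerpt compresses well into a one-line summary. Below is a hypothetical helper for doing that, keyed off the literal record shapes visible in this output; it assumes nothing about SPDK beyond these message strings, and finditer over the whole input means it works whether or not several records share a physical line:

```python
#!/usr/bin/env python3
"""Collapse a CONNECT-retry storm like the one above into a short summary."""
import re
import sys
from collections import Counter

# Matches the host-side failure record exactly as it appears in this log.
FAIL = re.compile(
    r"\[([\d-]+ [\d:.]+)\] nvme_fabric\.c: 599:nvme_fabric_qpair_connect_poll: "
    r"\*ERROR\*: Connect command failed, rc (-?\d+)"
)
TQPAIR = re.compile(r"Failed to connect tqpair=(0x[0-9a-f]+)")

text = sys.stdin.read()
stamps = [m.group(1) for m in FAIL.finditer(text)]
rcs = Counter(m.group(2) for m in FAIL.finditer(text))
tqpairs = Counter(TQPAIR.findall(text))

if stamps:
    print(f"{len(stamps)} failed CONNECT attempts "
          f"between {stamps[0]} and {stamps[-1]}")
    print(f"rc values: {dict(rcs)}; tqpairs: {dict(tqpairs)}")
```

Fed this excerpt on stdin, it reports the attempt count, an rc histogram of all -5s, and the single tqpair pointer involved, which is usually all a reviewer needs from a storm like this.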
00:29:21.462 [2024-12-10 14:31:22.149354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.462 [2024-12-10 14:31:22.149426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.462 [2024-12-10 14:31:22.149439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.462 [2024-12-10 14:31:22.149449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.462 [2024-12-10 14:31:22.149455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.462 [2024-12-10 14:31:22.149469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.462 qpair failed and we were unable to recover it. 00:29:21.462 [2024-12-10 14:31:22.159406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.462 [2024-12-10 14:31:22.159467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.462 [2024-12-10 14:31:22.159480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.462 [2024-12-10 14:31:22.159487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.462 [2024-12-10 14:31:22.159494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.462 [2024-12-10 14:31:22.159509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.462 qpair failed and we were unable to recover it. 00:29:21.462 [2024-12-10 14:31:22.169432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.462 [2024-12-10 14:31:22.169491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.462 [2024-12-10 14:31:22.169503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.462 [2024-12-10 14:31:22.169512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.462 [2024-12-10 14:31:22.169518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.462 [2024-12-10 14:31:22.169533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.462 qpair failed and we were unable to recover it. 
00:29:21.462 [2024-12-10 14:31:22.179460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.462 [2024-12-10 14:31:22.179516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.462 [2024-12-10 14:31:22.179529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.462 [2024-12-10 14:31:22.179536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.462 [2024-12-10 14:31:22.179542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.462 [2024-12-10 14:31:22.179556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.462 qpair failed and we were unable to recover it. 00:29:21.462 [2024-12-10 14:31:22.189460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.462 [2024-12-10 14:31:22.189523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.462 [2024-12-10 14:31:22.189536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.463 [2024-12-10 14:31:22.189543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.463 [2024-12-10 14:31:22.189549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.463 [2024-12-10 14:31:22.189566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.463 qpair failed and we were unable to recover it. 00:29:21.727 [2024-12-10 14:31:22.199536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.727 [2024-12-10 14:31:22.199594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.727 [2024-12-10 14:31:22.199610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.727 [2024-12-10 14:31:22.199618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.727 [2024-12-10 14:31:22.199628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.727 [2024-12-10 14:31:22.199646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.727 qpair failed and we were unable to recover it. 
00:29:21.727 [2024-12-10 14:31:22.209542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.727 [2024-12-10 14:31:22.209598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.727 [2024-12-10 14:31:22.209611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.727 [2024-12-10 14:31:22.209619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.727 [2024-12-10 14:31:22.209625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.727 [2024-12-10 14:31:22.209640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.727 qpair failed and we were unable to recover it. 00:29:21.727 [2024-12-10 14:31:22.219563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.727 [2024-12-10 14:31:22.219629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.727 [2024-12-10 14:31:22.219642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.727 [2024-12-10 14:31:22.219649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.727 [2024-12-10 14:31:22.219655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.727 [2024-12-10 14:31:22.219670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.727 qpair failed and we were unable to recover it. 00:29:21.727 [2024-12-10 14:31:22.229577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.727 [2024-12-10 14:31:22.229647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.727 [2024-12-10 14:31:22.229660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.727 [2024-12-10 14:31:22.229667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.727 [2024-12-10 14:31:22.229672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.727 [2024-12-10 14:31:22.229688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.727 qpair failed and we were unable to recover it. 
00:29:21.727 [2024-12-10 14:31:22.239619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.727 [2024-12-10 14:31:22.239678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.727 [2024-12-10 14:31:22.239691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.727 [2024-12-10 14:31:22.239698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.727 [2024-12-10 14:31:22.239705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.727 [2024-12-10 14:31:22.239719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.727 qpair failed and we were unable to recover it. 00:29:21.727 [2024-12-10 14:31:22.249616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.727 [2024-12-10 14:31:22.249673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.727 [2024-12-10 14:31:22.249686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.727 [2024-12-10 14:31:22.249693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.727 [2024-12-10 14:31:22.249699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.727 [2024-12-10 14:31:22.249714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.727 qpair failed and we were unable to recover it. 00:29:21.727 [2024-12-10 14:31:22.259676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.727 [2024-12-10 14:31:22.259730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.727 [2024-12-10 14:31:22.259743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.727 [2024-12-10 14:31:22.259750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.727 [2024-12-10 14:31:22.259756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.727 [2024-12-10 14:31:22.259770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.727 qpair failed and we were unable to recover it. 
00:29:21.727 [2024-12-10 14:31:22.269692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.727 [2024-12-10 14:31:22.269768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.727 [2024-12-10 14:31:22.269782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.727 [2024-12-10 14:31:22.269789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.727 [2024-12-10 14:31:22.269795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.727 [2024-12-10 14:31:22.269810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.727 qpair failed and we were unable to recover it. 00:29:21.727 [2024-12-10 14:31:22.279744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.728 [2024-12-10 14:31:22.279814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.728 [2024-12-10 14:31:22.279831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.728 [2024-12-10 14:31:22.279838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.728 [2024-12-10 14:31:22.279843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.728 [2024-12-10 14:31:22.279858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.728 qpair failed and we were unable to recover it. 00:29:21.728 [2024-12-10 14:31:22.289790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.728 [2024-12-10 14:31:22.289842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.728 [2024-12-10 14:31:22.289856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.728 [2024-12-10 14:31:22.289862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.728 [2024-12-10 14:31:22.289868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.728 [2024-12-10 14:31:22.289883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.728 qpair failed and we were unable to recover it. 
00:29:21.728 [2024-12-10 14:31:22.299769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.728 [2024-12-10 14:31:22.299817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.728 [2024-12-10 14:31:22.299830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.728 [2024-12-10 14:31:22.299837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.728 [2024-12-10 14:31:22.299843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.728 [2024-12-10 14:31:22.299857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.728 qpair failed and we were unable to recover it. 00:29:21.728 [2024-12-10 14:31:22.309731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.728 [2024-12-10 14:31:22.309788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.728 [2024-12-10 14:31:22.309801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.728 [2024-12-10 14:31:22.309808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.728 [2024-12-10 14:31:22.309815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.728 [2024-12-10 14:31:22.309830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.728 qpair failed and we were unable to recover it. 00:29:21.728 [2024-12-10 14:31:22.319836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.728 [2024-12-10 14:31:22.319896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.728 [2024-12-10 14:31:22.319909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.728 [2024-12-10 14:31:22.319916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.728 [2024-12-10 14:31:22.319923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.728 [2024-12-10 14:31:22.319941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.728 qpair failed and we were unable to recover it. 
00:29:21.728 [2024-12-10 14:31:22.329851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.728 [2024-12-10 14:31:22.329919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.728 [2024-12-10 14:31:22.329932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.728 [2024-12-10 14:31:22.329939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.728 [2024-12-10 14:31:22.329945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.728 [2024-12-10 14:31:22.329960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.728 qpair failed and we were unable to recover it. 00:29:21.728 [2024-12-10 14:31:22.339891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.728 [2024-12-10 14:31:22.339948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.728 [2024-12-10 14:31:22.339961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.728 [2024-12-10 14:31:22.339968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.728 [2024-12-10 14:31:22.339975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.728 [2024-12-10 14:31:22.339990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.728 qpair failed and we were unable to recover it. 00:29:21.728 [2024-12-10 14:31:22.349922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.728 [2024-12-10 14:31:22.349977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.728 [2024-12-10 14:31:22.349990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.728 [2024-12-10 14:31:22.349998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.728 [2024-12-10 14:31:22.350004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:21.728 [2024-12-10 14:31:22.350019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.728 qpair failed and we were unable to recover it. 
00:29:21.728 [2024-12-10 14:31:22.359955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.728 [2024-12-10 14:31:22.360009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.728 [2024-12-10 14:31:22.360022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.728 [2024-12-10 14:31:22.360030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.728 [2024-12-10 14:31:22.360036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.728 [2024-12-10 14:31:22.360051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.728 qpair failed and we were unable to recover it.
00:29:21.728 [2024-12-10 14:31:22.369980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.728 [2024-12-10 14:31:22.370029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.728 [2024-12-10 14:31:22.370043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.728 [2024-12-10 14:31:22.370050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.728 [2024-12-10 14:31:22.370056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.728 [2024-12-10 14:31:22.370071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.728 qpair failed and we were unable to recover it.
00:29:21.728 [2024-12-10 14:31:22.380008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.728 [2024-12-10 14:31:22.380060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.728 [2024-12-10 14:31:22.380074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.728 [2024-12-10 14:31:22.380080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.728 [2024-12-10 14:31:22.380086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.728 [2024-12-10 14:31:22.380101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.728 qpair failed and we were unable to recover it.
00:29:21.728 [2024-12-10 14:31:22.390026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.728 [2024-12-10 14:31:22.390077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.728 [2024-12-10 14:31:22.390090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.728 [2024-12-10 14:31:22.390097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.728 [2024-12-10 14:31:22.390103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.728 [2024-12-10 14:31:22.390118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.728 qpair failed and we were unable to recover it.
00:29:21.728 [2024-12-10 14:31:22.400106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.728 [2024-12-10 14:31:22.400161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.728 [2024-12-10 14:31:22.400174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.728 [2024-12-10 14:31:22.400181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.728 [2024-12-10 14:31:22.400187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.728 [2024-12-10 14:31:22.400201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.728 qpair failed and we were unable to recover it.
00:29:21.728 [2024-12-10 14:31:22.410093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.728 [2024-12-10 14:31:22.410150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.728 [2024-12-10 14:31:22.410167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.728 [2024-12-10 14:31:22.410174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.728 [2024-12-10 14:31:22.410180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.729 [2024-12-10 14:31:22.410195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.729 qpair failed and we were unable to recover it.
00:29:21.729 [2024-12-10 14:31:22.420046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.729 [2024-12-10 14:31:22.420105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.729 [2024-12-10 14:31:22.420118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.729 [2024-12-10 14:31:22.420125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.729 [2024-12-10 14:31:22.420131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.729 [2024-12-10 14:31:22.420146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.729 qpair failed and we were unable to recover it.
00:29:21.729 [2024-12-10 14:31:22.430128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.729 [2024-12-10 14:31:22.430183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.729 [2024-12-10 14:31:22.430197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.729 [2024-12-10 14:31:22.430204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.729 [2024-12-10 14:31:22.430211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.729 [2024-12-10 14:31:22.430230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.729 qpair failed and we were unable to recover it.
00:29:21.729 [2024-12-10 14:31:22.440185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.729 [2024-12-10 14:31:22.440299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.729 [2024-12-10 14:31:22.440321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.729 [2024-12-10 14:31:22.440331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.729 [2024-12-10 14:31:22.440338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.729 [2024-12-10 14:31:22.440354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.729 qpair failed and we were unable to recover it.
00:29:21.729 [2024-12-10 14:31:22.450207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.729 [2024-12-10 14:31:22.450266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.729 [2024-12-10 14:31:22.450279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.729 [2024-12-10 14:31:22.450286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.729 [2024-12-10 14:31:22.450296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.729 [2024-12-10 14:31:22.450311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.729 qpair failed and we were unable to recover it.
00:29:21.729 [2024-12-10 14:31:22.460248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.729 [2024-12-10 14:31:22.460302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.729 [2024-12-10 14:31:22.460315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.729 [2024-12-10 14:31:22.460322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.729 [2024-12-10 14:31:22.460329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.729 [2024-12-10 14:31:22.460344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.729 qpair failed and we were unable to recover it.
00:29:21.996 [2024-12-10 14:31:22.470287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.996 [2024-12-10 14:31:22.470369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.996 [2024-12-10 14:31:22.470383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.996 [2024-12-10 14:31:22.470403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.996 [2024-12-10 14:31:22.470410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.996 [2024-12-10 14:31:22.470424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.996 qpair failed and we were unable to recover it.
00:29:21.996 [2024-12-10 14:31:22.480296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.996 [2024-12-10 14:31:22.480352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.996 [2024-12-10 14:31:22.480365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.996 [2024-12-10 14:31:22.480372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.996 [2024-12-10 14:31:22.480379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.996 [2024-12-10 14:31:22.480394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.996 qpair failed and we were unable to recover it.
00:29:21.996 [2024-12-10 14:31:22.490308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.996 [2024-12-10 14:31:22.490380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.996 [2024-12-10 14:31:22.490393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.996 [2024-12-10 14:31:22.490400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.996 [2024-12-10 14:31:22.490407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.996 [2024-12-10 14:31:22.490423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.996 qpair failed and we were unable to recover it.
00:29:21.996 [2024-12-10 14:31:22.500380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.996 [2024-12-10 14:31:22.500437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.996 [2024-12-10 14:31:22.500450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.996 [2024-12-10 14:31:22.500457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.996 [2024-12-10 14:31:22.500462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.996 [2024-12-10 14:31:22.500478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.996 qpair failed and we were unable to recover it.
00:29:21.996 [2024-12-10 14:31:22.510409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.996 [2024-12-10 14:31:22.510487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.996 [2024-12-10 14:31:22.510501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.996 [2024-12-10 14:31:22.510509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.996 [2024-12-10 14:31:22.510516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.996 [2024-12-10 14:31:22.510531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.996 qpair failed and we were unable to recover it.
00:29:21.996 [2024-12-10 14:31:22.520359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.996 [2024-12-10 14:31:22.520415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.996 [2024-12-10 14:31:22.520428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.996 [2024-12-10 14:31:22.520435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.996 [2024-12-10 14:31:22.520442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.997 [2024-12-10 14:31:22.520457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.997 qpair failed and we were unable to recover it.
00:29:21.997 [2024-12-10 14:31:22.530499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.997 [2024-12-10 14:31:22.530560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.997 [2024-12-10 14:31:22.530573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.997 [2024-12-10 14:31:22.530580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.997 [2024-12-10 14:31:22.530586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.997 [2024-12-10 14:31:22.530601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.997 qpair failed and we were unable to recover it.
00:29:21.997 [2024-12-10 14:31:22.540494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.997 [2024-12-10 14:31:22.540544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.997 [2024-12-10 14:31:22.540561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.997 [2024-12-10 14:31:22.540567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.997 [2024-12-10 14:31:22.540573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.997 [2024-12-10 14:31:22.540587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.997 qpair failed and we were unable to recover it.
00:29:21.997 [2024-12-10 14:31:22.550493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.997 [2024-12-10 14:31:22.550571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.997 [2024-12-10 14:31:22.550585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.997 [2024-12-10 14:31:22.550592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.997 [2024-12-10 14:31:22.550598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.997 [2024-12-10 14:31:22.550612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.997 qpair failed and we were unable to recover it.
00:29:21.997 [2024-12-10 14:31:22.560614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.997 [2024-12-10 14:31:22.560673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.997 [2024-12-10 14:31:22.560685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.997 [2024-12-10 14:31:22.560692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.997 [2024-12-10 14:31:22.560699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.997 [2024-12-10 14:31:22.560713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.997 qpair failed and we were unable to recover it.
00:29:21.997 [2024-12-10 14:31:22.570559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.997 [2024-12-10 14:31:22.570619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.997 [2024-12-10 14:31:22.570633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.997 [2024-12-10 14:31:22.570640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.997 [2024-12-10 14:31:22.570646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.997 [2024-12-10 14:31:22.570661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.997 qpair failed and we were unable to recover it.
00:29:21.997 [2024-12-10 14:31:22.580579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.997 [2024-12-10 14:31:22.580632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.997 [2024-12-10 14:31:22.580645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.997 [2024-12-10 14:31:22.580655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.997 [2024-12-10 14:31:22.580662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.997 [2024-12-10 14:31:22.580677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.997 qpair failed and we were unable to recover it.
00:29:21.997 [2024-12-10 14:31:22.590609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.997 [2024-12-10 14:31:22.590661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.997 [2024-12-10 14:31:22.590675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.997 [2024-12-10 14:31:22.590682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.997 [2024-12-10 14:31:22.590689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.997 [2024-12-10 14:31:22.590704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.997 qpair failed and we were unable to recover it.
00:29:21.997 [2024-12-10 14:31:22.600619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.997 [2024-12-10 14:31:22.600691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.997 [2024-12-10 14:31:22.600705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.997 [2024-12-10 14:31:22.600712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.997 [2024-12-10 14:31:22.600719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.997 [2024-12-10 14:31:22.600733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.997 qpair failed and we were unable to recover it.
00:29:21.997 [2024-12-10 14:31:22.610674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.997 [2024-12-10 14:31:22.610725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.997 [2024-12-10 14:31:22.610738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.997 [2024-12-10 14:31:22.610746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.997 [2024-12-10 14:31:22.610752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.997 [2024-12-10 14:31:22.610767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.997 qpair failed and we were unable to recover it.
00:29:21.997 [2024-12-10 14:31:22.620663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.997 [2024-12-10 14:31:22.620742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.997 [2024-12-10 14:31:22.620755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.997 [2024-12-10 14:31:22.620762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.997 [2024-12-10 14:31:22.620768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.997 [2024-12-10 14:31:22.620783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.997 qpair failed and we were unable to recover it.
00:29:21.997 [2024-12-10 14:31:22.630740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.997 [2024-12-10 14:31:22.630792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.997 [2024-12-10 14:31:22.630805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.997 [2024-12-10 14:31:22.630812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.997 [2024-12-10 14:31:22.630818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.997 [2024-12-10 14:31:22.630832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.997 qpair failed and we were unable to recover it.
00:29:21.997 [2024-12-10 14:31:22.640758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.997 [2024-12-10 14:31:22.640819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.997 [2024-12-10 14:31:22.640831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.997 [2024-12-10 14:31:22.640838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.997 [2024-12-10 14:31:22.640845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.997 [2024-12-10 14:31:22.640860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.997 qpair failed and we were unable to recover it.
00:29:21.997 [2024-12-10 14:31:22.650793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.997 [2024-12-10 14:31:22.650852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.997 [2024-12-10 14:31:22.650865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.997 [2024-12-10 14:31:22.650872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.997 [2024-12-10 14:31:22.650878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.997 [2024-12-10 14:31:22.650892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.997 qpair failed and we were unable to recover it.
00:29:21.997 [2024-12-10 14:31:22.660823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.998 [2024-12-10 14:31:22.660910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.998 [2024-12-10 14:31:22.660923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.998 [2024-12-10 14:31:22.660930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.998 [2024-12-10 14:31:22.660936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.998 [2024-12-10 14:31:22.660950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.998 qpair failed and we were unable to recover it.
00:29:21.998 [2024-12-10 14:31:22.670815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.998 [2024-12-10 14:31:22.670871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.998 [2024-12-10 14:31:22.670883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.998 [2024-12-10 14:31:22.670890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.998 [2024-12-10 14:31:22.670896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.998 [2024-12-10 14:31:22.670911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.998 qpair failed and we were unable to recover it.
00:29:21.998 [2024-12-10 14:31:22.680829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.998 [2024-12-10 14:31:22.680901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.998 [2024-12-10 14:31:22.680914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.998 [2024-12-10 14:31:22.680922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.998 [2024-12-10 14:31:22.680928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.998 [2024-12-10 14:31:22.680943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.998 qpair failed and we were unable to recover it.
00:29:21.998 [2024-12-10 14:31:22.690991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.998 [2024-12-10 14:31:22.691042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.998 [2024-12-10 14:31:22.691056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.998 [2024-12-10 14:31:22.691063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.998 [2024-12-10 14:31:22.691070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.998 [2024-12-10 14:31:22.691084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.998 qpair failed and we were unable to recover it.
00:29:21.998 [2024-12-10 14:31:22.700874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.998 [2024-12-10 14:31:22.700970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.998 [2024-12-10 14:31:22.700984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.998 [2024-12-10 14:31:22.700991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.998 [2024-12-10 14:31:22.700997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.998 [2024-12-10 14:31:22.701013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.998 qpair failed and we were unable to recover it.
00:29:21.998 [2024-12-10 14:31:22.710968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.998 [2024-12-10 14:31:22.711036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.998 [2024-12-10 14:31:22.711049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.998 [2024-12-10 14:31:22.711061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.998 [2024-12-10 14:31:22.711067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.998 [2024-12-10 14:31:22.711083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.998 qpair failed and we were unable to recover it.
00:29:21.998 [2024-12-10 14:31:22.720976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.998 [2024-12-10 14:31:22.721029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.998 [2024-12-10 14:31:22.721042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.998 [2024-12-10 14:31:22.721049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.998 [2024-12-10 14:31:22.721056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.998 [2024-12-10 14:31:22.721070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.998 qpair failed and we were unable to recover it.
00:29:21.998 [2024-12-10 14:31:22.731055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.998 [2024-12-10 14:31:22.731151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.998 [2024-12-10 14:31:22.731165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.998 [2024-12-10 14:31:22.731172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.998 [2024-12-10 14:31:22.731178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:21.998 [2024-12-10 14:31:22.731193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.998 qpair failed and we were unable to recover it.
00:29:22.256 [2024-12-10 14:31:22.741053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.256 [2024-12-10 14:31:22.741109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.256 [2024-12-10 14:31:22.741123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.256 [2024-12-10 14:31:22.741130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.256 [2024-12-10 14:31:22.741136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.256 [2024-12-10 14:31:22.741151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.256 qpair failed and we were unable to recover it.
00:29:22.256 [2024-12-10 14:31:22.751081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.256 [2024-12-10 14:31:22.751137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.256 [2024-12-10 14:31:22.751151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.256 [2024-12-10 14:31:22.751158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.256 [2024-12-10 14:31:22.751164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.256 [2024-12-10 14:31:22.751183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.256 qpair failed and we were unable to recover it.
00:29:22.256 [2024-12-10 14:31:22.761096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.256 [2024-12-10 14:31:22.761151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.256 [2024-12-10 14:31:22.761164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.256 [2024-12-10 14:31:22.761171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.256 [2024-12-10 14:31:22.761177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.256 [2024-12-10 14:31:22.761192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.256 qpair failed and we were unable to recover it.
00:29:22.256 [2024-12-10 14:31:22.771161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.256 [2024-12-10 14:31:22.771221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.256 [2024-12-10 14:31:22.771235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.256 [2024-12-10 14:31:22.771242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.256 [2024-12-10 14:31:22.771248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.256 [2024-12-10 14:31:22.771262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.256 qpair failed and we were unable to recover it.
00:29:22.256 [2024-12-10 14:31:22.781179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.256 [2024-12-10 14:31:22.781250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.256 [2024-12-10 14:31:22.781265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.256 [2024-12-10 14:31:22.781272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.256 [2024-12-10 14:31:22.781278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.256 [2024-12-10 14:31:22.781292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.256 qpair failed and we were unable to recover it.
00:29:22.256 [2024-12-10 14:31:22.791174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.256 [2024-12-10 14:31:22.791236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.256 [2024-12-10 14:31:22.791249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.256 [2024-12-10 14:31:22.791257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.256 [2024-12-10 14:31:22.791263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.256 [2024-12-10 14:31:22.791279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.256 qpair failed and we were unable to recover it.
00:29:22.256 [2024-12-10 14:31:22.801207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.256 [2024-12-10 14:31:22.801272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.256 [2024-12-10 14:31:22.801285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.256 [2024-12-10 14:31:22.801291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.256 [2024-12-10 14:31:22.801297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.256 [2024-12-10 14:31:22.801312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.256 qpair failed and we were unable to recover it.
00:29:22.256 [2024-12-10 14:31:22.811311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.256 [2024-12-10 14:31:22.811406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.256 [2024-12-10 14:31:22.811419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.256 [2024-12-10 14:31:22.811426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.256 [2024-12-10 14:31:22.811432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.256 [2024-12-10 14:31:22.811447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.256 qpair failed and we were unable to recover it.
00:29:22.256 [2024-12-10 14:31:22.821273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.256 [2024-12-10 14:31:22.821327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.256 [2024-12-10 14:31:22.821341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.256 [2024-12-10 14:31:22.821348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.256 [2024-12-10 14:31:22.821354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.256 [2024-12-10 14:31:22.821369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.257 qpair failed and we were unable to recover it.
00:29:22.257 [2024-12-10 14:31:22.831236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.257 [2024-12-10 14:31:22.831289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.257 [2024-12-10 14:31:22.831303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.257 [2024-12-10 14:31:22.831310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.257 [2024-12-10 14:31:22.831317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.257 [2024-12-10 14:31:22.831330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.257 qpair failed and we were unable to recover it.
00:29:22.257 [2024-12-10 14:31:22.841391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.257 [2024-12-10 14:31:22.841499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.257 [2024-12-10 14:31:22.841515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.257 [2024-12-10 14:31:22.841523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.257 [2024-12-10 14:31:22.841529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.257 [2024-12-10 14:31:22.841543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.257 qpair failed and we were unable to recover it.
00:29:22.257 [2024-12-10 14:31:22.851383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.257 [2024-12-10 14:31:22.851437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.257 [2024-12-10 14:31:22.851449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.257 [2024-12-10 14:31:22.851456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.257 [2024-12-10 14:31:22.851463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.257 [2024-12-10 14:31:22.851478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.257 qpair failed and we were unable to recover it.
00:29:22.257 [2024-12-10 14:31:22.861348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.257 [2024-12-10 14:31:22.861414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.257 [2024-12-10 14:31:22.861427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.257 [2024-12-10 14:31:22.861434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.257 [2024-12-10 14:31:22.861441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.257 [2024-12-10 14:31:22.861455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.257 qpair failed and we were unable to recover it.
00:29:22.257 [2024-12-10 14:31:22.871421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.257 [2024-12-10 14:31:22.871475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.257 [2024-12-10 14:31:22.871488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.257 [2024-12-10 14:31:22.871495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.257 [2024-12-10 14:31:22.871501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.257 [2024-12-10 14:31:22.871516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.257 qpair failed and we were unable to recover it.
00:29:22.257 [2024-12-10 14:31:22.881485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.257 [2024-12-10 14:31:22.881541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.257 [2024-12-10 14:31:22.881554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.257 [2024-12-10 14:31:22.881561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.257 [2024-12-10 14:31:22.881571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.257 [2024-12-10 14:31:22.881585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.257 qpair failed and we were unable to recover it.
00:29:22.257 [2024-12-10 14:31:22.891470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.257 [2024-12-10 14:31:22.891522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.257 [2024-12-10 14:31:22.891535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.257 [2024-12-10 14:31:22.891542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.257 [2024-12-10 14:31:22.891549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.257 [2024-12-10 14:31:22.891563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.257 qpair failed and we were unable to recover it.
00:29:22.257 [2024-12-10 14:31:22.901505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.257 [2024-12-10 14:31:22.901559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.257 [2024-12-10 14:31:22.901572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.257 [2024-12-10 14:31:22.901579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.257 [2024-12-10 14:31:22.901586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.257 [2024-12-10 14:31:22.901600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.257 qpair failed and we were unable to recover it.
00:29:22.257 [2024-12-10 14:31:22.911528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.257 [2024-12-10 14:31:22.911581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.257 [2024-12-10 14:31:22.911594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.257 [2024-12-10 14:31:22.911601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.257 [2024-12-10 14:31:22.911607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.257 [2024-12-10 14:31:22.911622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.257 qpair failed and we were unable to recover it.
00:29:22.257 [2024-12-10 14:31:22.921564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.257 [2024-12-10 14:31:22.921653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.257 [2024-12-10 14:31:22.921666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.257 [2024-12-10 14:31:22.921673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.257 [2024-12-10 14:31:22.921679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.257 [2024-12-10 14:31:22.921693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.257 qpair failed and we were unable to recover it.
00:29:22.257 [2024-12-10 14:31:22.931615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.257 [2024-12-10 14:31:22.931684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.257 [2024-12-10 14:31:22.931697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.257 [2024-12-10 14:31:22.931705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.257 [2024-12-10 14:31:22.931711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.257 [2024-12-10 14:31:22.931725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.257 qpair failed and we were unable to recover it.
00:29:22.257 [2024-12-10 14:31:22.941613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.257 [2024-12-10 14:31:22.941665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.257 [2024-12-10 14:31:22.941678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.257 [2024-12-10 14:31:22.941685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.257 [2024-12-10 14:31:22.941691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.257 [2024-12-10 14:31:22.941706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.257 qpair failed and we were unable to recover it.
00:29:22.257 [2024-12-10 14:31:22.951644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.257 [2024-12-10 14:31:22.951727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.257 [2024-12-10 14:31:22.951741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.257 [2024-12-10 14:31:22.951748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.258 [2024-12-10 14:31:22.951754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.258 [2024-12-10 14:31:22.951769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.258 qpair failed and we were unable to recover it.
00:29:22.258 [2024-12-10 14:31:22.961694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.258 [2024-12-10 14:31:22.961753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.258 [2024-12-10 14:31:22.961766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.258 [2024-12-10 14:31:22.961773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.258 [2024-12-10 14:31:22.961779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.258 [2024-12-10 14:31:22.961793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.258 qpair failed and we were unable to recover it.
00:29:22.258 [2024-12-10 14:31:22.971733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.258 [2024-12-10 14:31:22.971797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.258 [2024-12-10 14:31:22.971813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.258 [2024-12-10 14:31:22.971820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.258 [2024-12-10 14:31:22.971827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.258 [2024-12-10 14:31:22.971841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.258 qpair failed and we were unable to recover it.
00:29:22.258 [2024-12-10 14:31:22.981730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.258 [2024-12-10 14:31:22.981787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.258 [2024-12-10 14:31:22.981800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.258 [2024-12-10 14:31:22.981806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.258 [2024-12-10 14:31:22.981813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.258 [2024-12-10 14:31:22.981827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.258 qpair failed and we were unable to recover it.
00:29:22.258 [2024-12-10 14:31:22.991759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.258 [2024-12-10 14:31:22.991814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.258 [2024-12-10 14:31:22.991827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.258 [2024-12-10 14:31:22.991834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.258 [2024-12-10 14:31:22.991841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.258 [2024-12-10 14:31:22.991855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.258 qpair failed and we were unable to recover it.
00:29:22.516 [2024-12-10 14:31:23.001823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.516 [2024-12-10 14:31:23.001928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.516 [2024-12-10 14:31:23.001941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.516 [2024-12-10 14:31:23.001948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.516 [2024-12-10 14:31:23.001954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.516 [2024-12-10 14:31:23.001969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.516 qpair failed and we were unable to recover it.
00:29:22.516 [2024-12-10 14:31:23.011818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.516 [2024-12-10 14:31:23.011874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.516 [2024-12-10 14:31:23.011887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.516 [2024-12-10 14:31:23.011894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.516 [2024-12-10 14:31:23.011903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.516 [2024-12-10 14:31:23.011919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.516 qpair failed and we were unable to recover it.
00:29:22.516 [2024-12-10 14:31:23.021876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.516 [2024-12-10 14:31:23.021936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.516 [2024-12-10 14:31:23.021949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.516 [2024-12-10 14:31:23.021956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.516 [2024-12-10 14:31:23.021962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.516 [2024-12-10 14:31:23.021978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.516 qpair failed and we were unable to recover it.
00:29:22.516 [2024-12-10 14:31:23.031886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.516 [2024-12-10 14:31:23.031969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.516 [2024-12-10 14:31:23.031983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.516 [2024-12-10 14:31:23.031990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.516 [2024-12-10 14:31:23.031996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.516 [2024-12-10 14:31:23.032010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.516 qpair failed and we were unable to recover it.
00:29:22.516 [2024-12-10 14:31:23.041937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.516 [2024-12-10 14:31:23.041995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.516 [2024-12-10 14:31:23.042008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.516 [2024-12-10 14:31:23.042015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.516 [2024-12-10 14:31:23.042021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:22.516 [2024-12-10 14:31:23.042036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.516 qpair failed and we were unable to recover it.
00:29:22.516 [2024-12-10 14:31:23.051946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.516 [2024-12-10 14:31:23.052005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.516 [2024-12-10 14:31:23.052018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.516 [2024-12-10 14:31:23.052025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.516 [2024-12-10 14:31:23.052032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.516 [2024-12-10 14:31:23.052046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.516 qpair failed and we were unable to recover it. 00:29:22.516 [2024-12-10 14:31:23.061975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.516 [2024-12-10 14:31:23.062029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.516 [2024-12-10 14:31:23.062042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.516 [2024-12-10 14:31:23.062049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.516 [2024-12-10 14:31:23.062055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.516 [2024-12-10 14:31:23.062069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.516 qpair failed and we were unable to recover it. 00:29:22.516 [2024-12-10 14:31:23.072034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.516 [2024-12-10 14:31:23.072131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.516 [2024-12-10 14:31:23.072145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.516 [2024-12-10 14:31:23.072151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.516 [2024-12-10 14:31:23.072157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.516 [2024-12-10 14:31:23.072172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.516 qpair failed and we were unable to recover it. 
00:29:22.516 [2024-12-10 14:31:23.082029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.516 [2024-12-10 14:31:23.082084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.516 [2024-12-10 14:31:23.082096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.516 [2024-12-10 14:31:23.082103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.516 [2024-12-10 14:31:23.082109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.516 [2024-12-10 14:31:23.082124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.516 qpair failed and we were unable to recover it. 00:29:22.516 [2024-12-10 14:31:23.092049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.516 [2024-12-10 14:31:23.092102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.516 [2024-12-10 14:31:23.092115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.516 [2024-12-10 14:31:23.092122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.516 [2024-12-10 14:31:23.092129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.516 [2024-12-10 14:31:23.092144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.516 qpair failed and we were unable to recover it. 00:29:22.516 [2024-12-10 14:31:23.102083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.516 [2024-12-10 14:31:23.102133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.516 [2024-12-10 14:31:23.102148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.516 [2024-12-10 14:31:23.102155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.516 [2024-12-10 14:31:23.102162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.516 [2024-12-10 14:31:23.102176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.516 qpair failed and we were unable to recover it. 
00:29:22.516 [2024-12-10 14:31:23.112126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.516 [2024-12-10 14:31:23.112208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.517 [2024-12-10 14:31:23.112225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.517 [2024-12-10 14:31:23.112232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.517 [2024-12-10 14:31:23.112238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.517 [2024-12-10 14:31:23.112253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.517 qpair failed and we were unable to recover it. 00:29:22.517 [2024-12-10 14:31:23.122147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.517 [2024-12-10 14:31:23.122205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.517 [2024-12-10 14:31:23.122221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.517 [2024-12-10 14:31:23.122229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.517 [2024-12-10 14:31:23.122235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.517 [2024-12-10 14:31:23.122250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.517 qpair failed and we were unable to recover it. 00:29:22.517 [2024-12-10 14:31:23.132157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.517 [2024-12-10 14:31:23.132215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.517 [2024-12-10 14:31:23.132232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.517 [2024-12-10 14:31:23.132239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.517 [2024-12-10 14:31:23.132246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.517 [2024-12-10 14:31:23.132261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.517 qpair failed and we were unable to recover it. 
00:29:22.517 [2024-12-10 14:31:23.142234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.517 [2024-12-10 14:31:23.142286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.517 [2024-12-10 14:31:23.142299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.517 [2024-12-10 14:31:23.142309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.517 [2024-12-10 14:31:23.142316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.517 [2024-12-10 14:31:23.142330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.517 qpair failed and we were unable to recover it. 00:29:22.517 [2024-12-10 14:31:23.152204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.517 [2024-12-10 14:31:23.152290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.517 [2024-12-10 14:31:23.152304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.517 [2024-12-10 14:31:23.152310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.517 [2024-12-10 14:31:23.152316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.517 [2024-12-10 14:31:23.152331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.517 qpair failed and we were unable to recover it. 00:29:22.517 [2024-12-10 14:31:23.162299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.517 [2024-12-10 14:31:23.162358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.517 [2024-12-10 14:31:23.162371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.517 [2024-12-10 14:31:23.162378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.517 [2024-12-10 14:31:23.162384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.517 [2024-12-10 14:31:23.162399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.517 qpair failed and we were unable to recover it. 
00:29:22.517 [2024-12-10 14:31:23.172305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.517 [2024-12-10 14:31:23.172369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.517 [2024-12-10 14:31:23.172382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.517 [2024-12-10 14:31:23.172389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.517 [2024-12-10 14:31:23.172395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.517 [2024-12-10 14:31:23.172411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.517 qpair failed and we were unable to recover it. 00:29:22.517 [2024-12-10 14:31:23.182324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.517 [2024-12-10 14:31:23.182378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.517 [2024-12-10 14:31:23.182391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.517 [2024-12-10 14:31:23.182398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.517 [2024-12-10 14:31:23.182404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.517 [2024-12-10 14:31:23.182419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.517 qpair failed and we were unable to recover it. 00:29:22.517 [2024-12-10 14:31:23.192367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.517 [2024-12-10 14:31:23.192446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.517 [2024-12-10 14:31:23.192459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.517 [2024-12-10 14:31:23.192466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.517 [2024-12-10 14:31:23.192472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.517 [2024-12-10 14:31:23.192487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.517 qpair failed and we were unable to recover it. 
00:29:22.517 [2024-12-10 14:31:23.202372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.517 [2024-12-10 14:31:23.202462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.517 [2024-12-10 14:31:23.202475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.517 [2024-12-10 14:31:23.202483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.517 [2024-12-10 14:31:23.202489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.517 [2024-12-10 14:31:23.202504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.517 qpair failed and we were unable to recover it. 00:29:22.517 [2024-12-10 14:31:23.212399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.517 [2024-12-10 14:31:23.212453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.517 [2024-12-10 14:31:23.212467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.517 [2024-12-10 14:31:23.212474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.517 [2024-12-10 14:31:23.212481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.517 [2024-12-10 14:31:23.212495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.517 qpair failed and we were unable to recover it. 00:29:22.517 [2024-12-10 14:31:23.222442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.517 [2024-12-10 14:31:23.222493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.517 [2024-12-10 14:31:23.222506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.517 [2024-12-10 14:31:23.222513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.517 [2024-12-10 14:31:23.222519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.517 [2024-12-10 14:31:23.222534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.517 qpair failed and we were unable to recover it. 
00:29:22.517 [2024-12-10 14:31:23.232450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.517 [2024-12-10 14:31:23.232510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.517 [2024-12-10 14:31:23.232523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.517 [2024-12-10 14:31:23.232530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.517 [2024-12-10 14:31:23.232537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.517 [2024-12-10 14:31:23.232551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.517 qpair failed and we were unable to recover it. 00:29:22.517 [2024-12-10 14:31:23.242475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.517 [2024-12-10 14:31:23.242553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.517 [2024-12-10 14:31:23.242567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.517 [2024-12-10 14:31:23.242574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.517 [2024-12-10 14:31:23.242580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.518 [2024-12-10 14:31:23.242594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.518 qpair failed and we were unable to recover it. 00:29:22.518 [2024-12-10 14:31:23.252530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.518 [2024-12-10 14:31:23.252587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.518 [2024-12-10 14:31:23.252601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.518 [2024-12-10 14:31:23.252608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.518 [2024-12-10 14:31:23.252614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.518 [2024-12-10 14:31:23.252629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.518 qpair failed and we were unable to recover it. 
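For reference, the "sct 1, sc 130" pair printed by nvme_fabric.c:610 decodes as status code type 1 (command specific) and status code 0x82, which for a fabrics CONNECT means Invalid Parameters — consistent with the unknown-cntlid rejection above. A hedged sketch of that check against a completion, assuming the constant names below match SPDK's public spec headers:

#include <stdbool.h>
#include "spdk/nvme_spec.h"
#include "spdk/nvmf_spec.h"

static bool
connect_failed_invalid_param(const struct spdk_nvme_cpl *cpl)
{
        /* sct 1 == SPDK_NVME_SCT_COMMAND_SPECIFIC; sc 130 (0x82) is the
         * fabrics CONNECT "Invalid Parameters" status. */
        return cpl->status.sct == SPDK_NVME_SCT_COMMAND_SPECIFIC &&
               cpl->status.sc == SPDK_NVMF_FABRIC_SC_INVALID_PARAM;
}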
00:29:22.775 [2024-12-10 14:31:23.262580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.776 [2024-12-10 14:31:23.262636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.776 [2024-12-10 14:31:23.262650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.776 [2024-12-10 14:31:23.262657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.776 [2024-12-10 14:31:23.262664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.776 [2024-12-10 14:31:23.262679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.776 qpair failed and we were unable to recover it. 00:29:22.776 [2024-12-10 14:31:23.272583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.776 [2024-12-10 14:31:23.272637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.776 [2024-12-10 14:31:23.272650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.776 [2024-12-10 14:31:23.272661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.776 [2024-12-10 14:31:23.272668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.776 [2024-12-10 14:31:23.272683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.776 qpair failed and we were unable to recover it. 00:29:22.776 [2024-12-10 14:31:23.282599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.776 [2024-12-10 14:31:23.282657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.776 [2024-12-10 14:31:23.282670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.776 [2024-12-10 14:31:23.282677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.776 [2024-12-10 14:31:23.282684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.776 [2024-12-10 14:31:23.282698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.776 qpair failed and we were unable to recover it. 
00:29:22.776 [2024-12-10 14:31:23.292672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.776 [2024-12-10 14:31:23.292727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.776 [2024-12-10 14:31:23.292740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.776 [2024-12-10 14:31:23.292747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.776 [2024-12-10 14:31:23.292754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.776 [2024-12-10 14:31:23.292768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.776 qpair failed and we were unable to recover it. 00:29:22.776 [2024-12-10 14:31:23.302683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.776 [2024-12-10 14:31:23.302733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.776 [2024-12-10 14:31:23.302746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.776 [2024-12-10 14:31:23.302754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.776 [2024-12-10 14:31:23.302760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.776 [2024-12-10 14:31:23.302775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.776 qpair failed and we were unable to recover it. 00:29:22.776 [2024-12-10 14:31:23.312672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.776 [2024-12-10 14:31:23.312746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.776 [2024-12-10 14:31:23.312759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.776 [2024-12-10 14:31:23.312766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.776 [2024-12-10 14:31:23.312772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.776 [2024-12-10 14:31:23.312790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.776 qpair failed and we were unable to recover it. 
00:29:22.776 [2024-12-10 14:31:23.322700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.776 [2024-12-10 14:31:23.322756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.776 [2024-12-10 14:31:23.322770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.776 [2024-12-10 14:31:23.322777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.776 [2024-12-10 14:31:23.322783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.776 [2024-12-10 14:31:23.322798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.776 qpair failed and we were unable to recover it. 00:29:22.776 [2024-12-10 14:31:23.332741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.776 [2024-12-10 14:31:23.332803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.776 [2024-12-10 14:31:23.332816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.776 [2024-12-10 14:31:23.332823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.776 [2024-12-10 14:31:23.332829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.776 [2024-12-10 14:31:23.332843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.776 qpair failed and we were unable to recover it. 00:29:22.776 [2024-12-10 14:31:23.342795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.776 [2024-12-10 14:31:23.342852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.776 [2024-12-10 14:31:23.342865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.776 [2024-12-10 14:31:23.342874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.776 [2024-12-10 14:31:23.342881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.776 [2024-12-10 14:31:23.342897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.776 qpair failed and we were unable to recover it. 
00:29:22.776 [2024-12-10 14:31:23.352774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.776 [2024-12-10 14:31:23.352826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.776 [2024-12-10 14:31:23.352839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.776 [2024-12-10 14:31:23.352846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.776 [2024-12-10 14:31:23.352853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.776 [2024-12-10 14:31:23.352867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.776 qpair failed and we were unable to recover it. 00:29:22.776 [2024-12-10 14:31:23.362797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.776 [2024-12-10 14:31:23.362854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.776 [2024-12-10 14:31:23.362867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.776 [2024-12-10 14:31:23.362874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.776 [2024-12-10 14:31:23.362880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.776 [2024-12-10 14:31:23.362895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.776 qpair failed and we were unable to recover it. 00:29:22.776 [2024-12-10 14:31:23.372842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.776 [2024-12-10 14:31:23.372898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.776 [2024-12-10 14:31:23.372911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.776 [2024-12-10 14:31:23.372918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.776 [2024-12-10 14:31:23.372924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.776 [2024-12-10 14:31:23.372939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.776 qpair failed and we were unable to recover it. 
00:29:22.776 [2024-12-10 14:31:23.382875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.776 [2024-12-10 14:31:23.382929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.776 [2024-12-10 14:31:23.382942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.776 [2024-12-10 14:31:23.382949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.776 [2024-12-10 14:31:23.382955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.776 [2024-12-10 14:31:23.382970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.776 qpair failed and we were unable to recover it. 00:29:22.776 [2024-12-10 14:31:23.392930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.776 [2024-12-10 14:31:23.392984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.776 [2024-12-10 14:31:23.392997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.776 [2024-12-10 14:31:23.393004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.777 [2024-12-10 14:31:23.393010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.777 [2024-12-10 14:31:23.393025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.777 qpair failed and we were unable to recover it. 00:29:22.777 [2024-12-10 14:31:23.402945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.777 [2024-12-10 14:31:23.402999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.777 [2024-12-10 14:31:23.403017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.777 [2024-12-10 14:31:23.403024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.777 [2024-12-10 14:31:23.403030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.777 [2024-12-10 14:31:23.403045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.777 qpair failed and we were unable to recover it. 
00:29:22.777 [2024-12-10 14:31:23.412954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.777 [2024-12-10 14:31:23.413009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.777 [2024-12-10 14:31:23.413022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.777 [2024-12-10 14:31:23.413029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.777 [2024-12-10 14:31:23.413035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.777 [2024-12-10 14:31:23.413050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.777 qpair failed and we were unable to recover it. 00:29:22.777 [2024-12-10 14:31:23.422983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.777 [2024-12-10 14:31:23.423043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.777 [2024-12-10 14:31:23.423056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.777 [2024-12-10 14:31:23.423063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.777 [2024-12-10 14:31:23.423070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.777 [2024-12-10 14:31:23.423085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.777 qpair failed and we were unable to recover it. 00:29:22.777 [2024-12-10 14:31:23.432999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.777 [2024-12-10 14:31:23.433056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.777 [2024-12-10 14:31:23.433069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.777 [2024-12-10 14:31:23.433076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.777 [2024-12-10 14:31:23.433082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.777 [2024-12-10 14:31:23.433097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.777 qpair failed and we were unable to recover it. 
00:29:22.777 [2024-12-10 14:31:23.443044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.777 [2024-12-10 14:31:23.443101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.777 [2024-12-10 14:31:23.443114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.777 [2024-12-10 14:31:23.443121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.777 [2024-12-10 14:31:23.443130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.777 [2024-12-10 14:31:23.443145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.777 qpair failed and we were unable to recover it. 00:29:22.777 [2024-12-10 14:31:23.453139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.777 [2024-12-10 14:31:23.453227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.777 [2024-12-10 14:31:23.453240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.777 [2024-12-10 14:31:23.453248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.777 [2024-12-10 14:31:23.453255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.777 [2024-12-10 14:31:23.453269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.777 qpair failed and we were unable to recover it. 00:29:22.777 [2024-12-10 14:31:23.463093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.777 [2024-12-10 14:31:23.463146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.777 [2024-12-10 14:31:23.463160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.777 [2024-12-10 14:31:23.463167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.777 [2024-12-10 14:31:23.463173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.777 [2024-12-10 14:31:23.463188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.777 qpair failed and we were unable to recover it. 
00:29:22.777 [2024-12-10 14:31:23.473112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.777 [2024-12-10 14:31:23.473166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.777 [2024-12-10 14:31:23.473179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.777 [2024-12-10 14:31:23.473186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.777 [2024-12-10 14:31:23.473193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.777 [2024-12-10 14:31:23.473208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.777 qpair failed and we were unable to recover it. 00:29:22.777 [2024-12-10 14:31:23.483174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.777 [2024-12-10 14:31:23.483257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.777 [2024-12-10 14:31:23.483271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.777 [2024-12-10 14:31:23.483278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.777 [2024-12-10 14:31:23.483284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.777 [2024-12-10 14:31:23.483299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.777 qpair failed and we were unable to recover it. 00:29:22.777 [2024-12-10 14:31:23.493261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.777 [2024-12-10 14:31:23.493318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.777 [2024-12-10 14:31:23.493332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.777 [2024-12-10 14:31:23.493339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.777 [2024-12-10 14:31:23.493345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.777 [2024-12-10 14:31:23.493361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.777 qpair failed and we were unable to recover it. 
00:29:22.777 [2024-12-10 14:31:23.503171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.777 [2024-12-10 14:31:23.503224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.777 [2024-12-10 14:31:23.503237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.777 [2024-12-10 14:31:23.503244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.777 [2024-12-10 14:31:23.503250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.777 [2024-12-10 14:31:23.503265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.777 qpair failed and we were unable to recover it. 00:29:22.777 [2024-12-10 14:31:23.513234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.777 [2024-12-10 14:31:23.513286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.777 [2024-12-10 14:31:23.513300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.777 [2024-12-10 14:31:23.513307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.777 [2024-12-10 14:31:23.513313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:22.777 [2024-12-10 14:31:23.513328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.777 qpair failed and we were unable to recover it. 00:29:23.036 [2024-12-10 14:31:23.523300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.036 [2024-12-10 14:31:23.523401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.036 [2024-12-10 14:31:23.523414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.036 [2024-12-10 14:31:23.523421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.036 [2024-12-10 14:31:23.523427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.036 [2024-12-10 14:31:23.523441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.036 qpair failed and we were unable to recover it. 
00:29:23.036 [2024-12-10 14:31:23.533297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.036 [2024-12-10 14:31:23.533348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.036 [2024-12-10 14:31:23.533364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.036 [2024-12-10 14:31:23.533371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.036 [2024-12-10 14:31:23.533378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.036 [2024-12-10 14:31:23.533392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-12-10 14:31:23.543347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.036 [2024-12-10 14:31:23.543408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.036 [2024-12-10 14:31:23.543421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.036 [2024-12-10 14:31:23.543428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.036 [2024-12-10 14:31:23.543434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.036 [2024-12-10 14:31:23.543449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-12-10 14:31:23.553309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.036 [2024-12-10 14:31:23.553385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.036 [2024-12-10 14:31:23.553398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.036 [2024-12-10 14:31:23.553406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.036 [2024-12-10 14:31:23.553412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.036 [2024-12-10 14:31:23.553426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.036 qpair failed and we were unable to recover it. 
00:29:23.036 [2024-12-10 14:31:23.563412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.036 [2024-12-10 14:31:23.563479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.036 [2024-12-10 14:31:23.563492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.036 [2024-12-10 14:31:23.563500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.036 [2024-12-10 14:31:23.563506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.036 [2024-12-10 14:31:23.563521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-12-10 14:31:23.573412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.036 [2024-12-10 14:31:23.573467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.036 [2024-12-10 14:31:23.573480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.036 [2024-12-10 14:31:23.573487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.036 [2024-12-10 14:31:23.573496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.036 [2024-12-10 14:31:23.573511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-12-10 14:31:23.583438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.036 [2024-12-10 14:31:23.583508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.036 [2024-12-10 14:31:23.583521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.036 [2024-12-10 14:31:23.583528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.036 [2024-12-10 14:31:23.583534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.036 [2024-12-10 14:31:23.583549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.036 qpair failed and we were unable to recover it. 
00:29:23.036 [2024-12-10 14:31:23.593474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.036 [2024-12-10 14:31:23.593550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.036 [2024-12-10 14:31:23.593563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.036 [2024-12-10 14:31:23.593571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.036 [2024-12-10 14:31:23.593577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.036 [2024-12-10 14:31:23.593591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-12-10 14:31:23.603488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.036 [2024-12-10 14:31:23.603544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.036 [2024-12-10 14:31:23.603557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.036 [2024-12-10 14:31:23.603564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.036 [2024-12-10 14:31:23.603570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.036 [2024-12-10 14:31:23.603585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-12-10 14:31:23.613513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.036 [2024-12-10 14:31:23.613566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.036 [2024-12-10 14:31:23.613578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.036 [2024-12-10 14:31:23.613586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.036 [2024-12-10 14:31:23.613591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.036 [2024-12-10 14:31:23.613606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.036 qpair failed and we were unable to recover it. 
00:29:23.036 [2024-12-10 14:31:23.623530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.037 [2024-12-10 14:31:23.623581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.037 [2024-12-10 14:31:23.623594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.037 [2024-12-10 14:31:23.623601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.037 [2024-12-10 14:31:23.623607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.037 [2024-12-10 14:31:23.623622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-12-10 14:31:23.633561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.037 [2024-12-10 14:31:23.633613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.037 [2024-12-10 14:31:23.633626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.037 [2024-12-10 14:31:23.633633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.037 [2024-12-10 14:31:23.633639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.037 [2024-12-10 14:31:23.633654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-12-10 14:31:23.643642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.037 [2024-12-10 14:31:23.643700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.037 [2024-12-10 14:31:23.643713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.037 [2024-12-10 14:31:23.643720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.037 [2024-12-10 14:31:23.643727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.037 [2024-12-10 14:31:23.643742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.037 qpair failed and we were unable to recover it. 
00:29:23.037 [2024-12-10 14:31:23.653669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.037 [2024-12-10 14:31:23.653735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.037 [2024-12-10 14:31:23.653748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.037 [2024-12-10 14:31:23.653755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.037 [2024-12-10 14:31:23.653762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.037 [2024-12-10 14:31:23.653776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-12-10 14:31:23.663679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.037 [2024-12-10 14:31:23.663738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.037 [2024-12-10 14:31:23.663755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.037 [2024-12-10 14:31:23.663762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.037 [2024-12-10 14:31:23.663768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.037 [2024-12-10 14:31:23.663782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-12-10 14:31:23.673681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.037 [2024-12-10 14:31:23.673736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.037 [2024-12-10 14:31:23.673748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.037 [2024-12-10 14:31:23.673755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.037 [2024-12-10 14:31:23.673762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.037 [2024-12-10 14:31:23.673776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.037 qpair failed and we were unable to recover it. 
00:29:23.037 [2024-12-10 14:31:23.683719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.037 [2024-12-10 14:31:23.683778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.037 [2024-12-10 14:31:23.683790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.037 [2024-12-10 14:31:23.683798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.037 [2024-12-10 14:31:23.683804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.037 [2024-12-10 14:31:23.683819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-12-10 14:31:23.693735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.037 [2024-12-10 14:31:23.693789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.037 [2024-12-10 14:31:23.693801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.037 [2024-12-10 14:31:23.693808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.037 [2024-12-10 14:31:23.693814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.037 [2024-12-10 14:31:23.693829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-12-10 14:31:23.703758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.037 [2024-12-10 14:31:23.703813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.037 [2024-12-10 14:31:23.703825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.037 [2024-12-10 14:31:23.703835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.037 [2024-12-10 14:31:23.703842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.037 [2024-12-10 14:31:23.703857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.037 qpair failed and we were unable to recover it. 
00:29:23.037 [2024-12-10 14:31:23.713717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.037 [2024-12-10 14:31:23.713810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.037 [2024-12-10 14:31:23.713824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.037 [2024-12-10 14:31:23.713831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.037 [2024-12-10 14:31:23.713837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.037 [2024-12-10 14:31:23.713852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-12-10 14:31:23.723825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.037 [2024-12-10 14:31:23.723923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.037 [2024-12-10 14:31:23.723936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.037 [2024-12-10 14:31:23.723943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.037 [2024-12-10 14:31:23.723949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.037 [2024-12-10 14:31:23.723964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-12-10 14:31:23.733851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.037 [2024-12-10 14:31:23.733902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.037 [2024-12-10 14:31:23.733915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.037 [2024-12-10 14:31:23.733922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.037 [2024-12-10 14:31:23.733928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.037 [2024-12-10 14:31:23.733943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.037 qpair failed and we were unable to recover it. 
00:29:23.037 [2024-12-10 14:31:23.743880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.037 [2024-12-10 14:31:23.743930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.037 [2024-12-10 14:31:23.743943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.037 [2024-12-10 14:31:23.743950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.037 [2024-12-10 14:31:23.743956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.037 [2024-12-10 14:31:23.743971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-12-10 14:31:23.753902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.037 [2024-12-10 14:31:23.753951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.038 [2024-12-10 14:31:23.753964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.038 [2024-12-10 14:31:23.753971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.038 [2024-12-10 14:31:23.753977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.038 [2024-12-10 14:31:23.753992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.038 qpair failed and we were unable to recover it. 00:29:23.038 [2024-12-10 14:31:23.763913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.038 [2024-12-10 14:31:23.763967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.038 [2024-12-10 14:31:23.763979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.038 [2024-12-10 14:31:23.763986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.038 [2024-12-10 14:31:23.763992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.038 [2024-12-10 14:31:23.764007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.038 qpair failed and we were unable to recover it. 
00:29:23.038 [2024-12-10 14:31:23.773967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.038 [2024-12-10 14:31:23.774023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.038 [2024-12-10 14:31:23.774036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.038 [2024-12-10 14:31:23.774042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.038 [2024-12-10 14:31:23.774048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.038 [2024-12-10 14:31:23.774063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.038 qpair failed and we were unable to recover it. 00:29:23.295 [2024-12-10 14:31:23.784001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.295 [2024-12-10 14:31:23.784059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.295 [2024-12-10 14:31:23.784072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.296 [2024-12-10 14:31:23.784080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.296 [2024-12-10 14:31:23.784085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.296 [2024-12-10 14:31:23.784100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.296 qpair failed and we were unable to recover it. 00:29:23.296 [2024-12-10 14:31:23.794025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.296 [2024-12-10 14:31:23.794098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.296 [2024-12-10 14:31:23.794111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.296 [2024-12-10 14:31:23.794118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.296 [2024-12-10 14:31:23.794124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.296 [2024-12-10 14:31:23.794139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.296 qpair failed and we were unable to recover it. 
00:29:23.296 [2024-12-10 14:31:23.804049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.296 [2024-12-10 14:31:23.804104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.296 [2024-12-10 14:31:23.804117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.296 [2024-12-10 14:31:23.804124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.296 [2024-12-10 14:31:23.804130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.296 [2024-12-10 14:31:23.804144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.296 qpair failed and we were unable to recover it. 00:29:23.296 [2024-12-10 14:31:23.814042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.296 [2024-12-10 14:31:23.814093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.296 [2024-12-10 14:31:23.814106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.296 [2024-12-10 14:31:23.814113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.296 [2024-12-10 14:31:23.814119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.296 [2024-12-10 14:31:23.814134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.296 qpair failed and we were unable to recover it. 00:29:23.296 [2024-12-10 14:31:23.824091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.296 [2024-12-10 14:31:23.824157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.296 [2024-12-10 14:31:23.824170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.296 [2024-12-10 14:31:23.824178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.296 [2024-12-10 14:31:23.824184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.296 [2024-12-10 14:31:23.824198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.296 qpair failed and we were unable to recover it. 
00:29:23.296 [2024-12-10 14:31:23.834146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.296 [2024-12-10 14:31:23.834230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.296 [2024-12-10 14:31:23.834244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.296 [2024-12-10 14:31:23.834253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.296 [2024-12-10 14:31:23.834260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.296 [2024-12-10 14:31:23.834274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.296 qpair failed and we were unable to recover it. 00:29:23.296 [2024-12-10 14:31:23.844106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.296 [2024-12-10 14:31:23.844158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.296 [2024-12-10 14:31:23.844171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.296 [2024-12-10 14:31:23.844178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.296 [2024-12-10 14:31:23.844184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.296 [2024-12-10 14:31:23.844200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.296 qpair failed and we were unable to recover it. 00:29:23.296 [2024-12-10 14:31:23.854195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.296 [2024-12-10 14:31:23.854279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.296 [2024-12-10 14:31:23.854293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.296 [2024-12-10 14:31:23.854300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.296 [2024-12-10 14:31:23.854306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.296 [2024-12-10 14:31:23.854321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.296 qpair failed and we were unable to recover it. 
00:29:23.296 [2024-12-10 14:31:23.864188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.296 [2024-12-10 14:31:23.864253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.296 [2024-12-10 14:31:23.864266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.296 [2024-12-10 14:31:23.864273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.296 [2024-12-10 14:31:23.864279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.296 [2024-12-10 14:31:23.864294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.296 qpair failed and we were unable to recover it. 00:29:23.296 [2024-12-10 14:31:23.874229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.296 [2024-12-10 14:31:23.874285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.296 [2024-12-10 14:31:23.874297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.296 [2024-12-10 14:31:23.874304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.296 [2024-12-10 14:31:23.874311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.296 [2024-12-10 14:31:23.874328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.296 qpair failed and we were unable to recover it. 00:29:23.296 [2024-12-10 14:31:23.884267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.296 [2024-12-10 14:31:23.884324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.296 [2024-12-10 14:31:23.884337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.296 [2024-12-10 14:31:23.884344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.296 [2024-12-10 14:31:23.884351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.296 [2024-12-10 14:31:23.884365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.296 qpair failed and we were unable to recover it. 
00:29:23.296 [2024-12-10 14:31:23.894267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.296 [2024-12-10 14:31:23.894349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.296 [2024-12-10 14:31:23.894363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.296 [2024-12-10 14:31:23.894370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.296 [2024-12-10 14:31:23.894376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.296 [2024-12-10 14:31:23.894391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.296 qpair failed and we were unable to recover it. 00:29:23.296 [2024-12-10 14:31:23.904308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.296 [2024-12-10 14:31:23.904364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.296 [2024-12-10 14:31:23.904378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.296 [2024-12-10 14:31:23.904385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.296 [2024-12-10 14:31:23.904391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.296 [2024-12-10 14:31:23.904406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.296 qpair failed and we were unable to recover it. 00:29:23.296 [2024-12-10 14:31:23.914253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.296 [2024-12-10 14:31:23.914308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.296 [2024-12-10 14:31:23.914324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.297 [2024-12-10 14:31:23.914331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.297 [2024-12-10 14:31:23.914337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.297 [2024-12-10 14:31:23.914353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.297 qpair failed and we were unable to recover it. 
00:29:23.297 [2024-12-10 14:31:23.924304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.297 [2024-12-10 14:31:23.924377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.297 [2024-12-10 14:31:23.924390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.297 [2024-12-10 14:31:23.924398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.297 [2024-12-10 14:31:23.924405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.297 [2024-12-10 14:31:23.924420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.297 qpair failed and we were unable to recover it. 00:29:23.297 [2024-12-10 14:31:23.934381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.297 [2024-12-10 14:31:23.934479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.297 [2024-12-10 14:31:23.934493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.297 [2024-12-10 14:31:23.934501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.297 [2024-12-10 14:31:23.934507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.297 [2024-12-10 14:31:23.934523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.297 qpair failed and we were unable to recover it. 00:29:23.297 [2024-12-10 14:31:23.944440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.297 [2024-12-10 14:31:23.944492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.297 [2024-12-10 14:31:23.944506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.297 [2024-12-10 14:31:23.944514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.297 [2024-12-10 14:31:23.944520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.297 [2024-12-10 14:31:23.944535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.297 qpair failed and we were unable to recover it. 
00:29:23.297 [2024-12-10 14:31:23.954384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.297 [2024-12-10 14:31:23.954483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.297 [2024-12-10 14:31:23.954496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.297 [2024-12-10 14:31:23.954503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.297 [2024-12-10 14:31:23.954510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.297 [2024-12-10 14:31:23.954523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.297 qpair failed and we were unable to recover it. 00:29:23.297 [2024-12-10 14:31:23.964457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.297 [2024-12-10 14:31:23.964513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.297 [2024-12-10 14:31:23.964528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.297 [2024-12-10 14:31:23.964536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.297 [2024-12-10 14:31:23.964542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.297 [2024-12-10 14:31:23.964556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.297 qpair failed and we were unable to recover it. 00:29:23.297 [2024-12-10 14:31:23.974491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.297 [2024-12-10 14:31:23.974558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.297 [2024-12-10 14:31:23.974571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.297 [2024-12-10 14:31:23.974578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.297 [2024-12-10 14:31:23.974584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.297 [2024-12-10 14:31:23.974599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.297 qpair failed and we were unable to recover it. 
00:29:23.297 [2024-12-10 14:31:23.984541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.297 [2024-12-10 14:31:23.984594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.297 [2024-12-10 14:31:23.984607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.297 [2024-12-10 14:31:23.984614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.297 [2024-12-10 14:31:23.984620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.297 [2024-12-10 14:31:23.984634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.297 qpair failed and we were unable to recover it. 00:29:23.297 [2024-12-10 14:31:23.994487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.297 [2024-12-10 14:31:23.994540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.297 [2024-12-10 14:31:23.994553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.297 [2024-12-10 14:31:23.994560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.297 [2024-12-10 14:31:23.994566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.297 [2024-12-10 14:31:23.994580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.297 qpair failed and we were unable to recover it. 00:29:23.297 [2024-12-10 14:31:24.004689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.297 [2024-12-10 14:31:24.004760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.297 [2024-12-10 14:31:24.004773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.297 [2024-12-10 14:31:24.004780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.297 [2024-12-10 14:31:24.004790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.297 [2024-12-10 14:31:24.004804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.297 qpair failed and we were unable to recover it. 
00:29:23.297 [2024-12-10 14:31:24.014606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.297 [2024-12-10 14:31:24.014707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.297 [2024-12-10 14:31:24.014720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.297 [2024-12-10 14:31:24.014727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.297 [2024-12-10 14:31:24.014733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.297 [2024-12-10 14:31:24.014748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.297 qpair failed and we were unable to recover it. 00:29:23.297 [2024-12-10 14:31:24.024587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.297 [2024-12-10 14:31:24.024688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.297 [2024-12-10 14:31:24.024701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.297 [2024-12-10 14:31:24.024708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.297 [2024-12-10 14:31:24.024714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.297 [2024-12-10 14:31:24.024729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.297 qpair failed and we were unable to recover it. 00:29:23.556 [2024-12-10 14:31:24.034656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.556 [2024-12-10 14:31:24.034727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.556 [2024-12-10 14:31:24.034740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.556 [2024-12-10 14:31:24.034747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.556 [2024-12-10 14:31:24.034753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.556 [2024-12-10 14:31:24.034768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.556 qpair failed and we were unable to recover it. 
00:29:23.556 [2024-12-10 14:31:24.044639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.556 [2024-12-10 14:31:24.044692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.556 [2024-12-10 14:31:24.044705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.556 [2024-12-10 14:31:24.044712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.556 [2024-12-10 14:31:24.044718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.556 [2024-12-10 14:31:24.044733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.556 qpair failed and we were unable to recover it. 00:29:23.556 [2024-12-10 14:31:24.054731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.556 [2024-12-10 14:31:24.054785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.556 [2024-12-10 14:31:24.054797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.556 [2024-12-10 14:31:24.054804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.556 [2024-12-10 14:31:24.054811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.556 [2024-12-10 14:31:24.054825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.556 qpair failed and we were unable to recover it. 00:29:23.556 [2024-12-10 14:31:24.064790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.556 [2024-12-10 14:31:24.064854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.556 [2024-12-10 14:31:24.064867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.556 [2024-12-10 14:31:24.064874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.556 [2024-12-10 14:31:24.064880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.556 [2024-12-10 14:31:24.064895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.556 qpair failed and we were unable to recover it. 
00:29:23.556 [2024-12-10 14:31:24.074781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.556 [2024-12-10 14:31:24.074896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.556 [2024-12-10 14:31:24.074912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.556 [2024-12-10 14:31:24.074919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.556 [2024-12-10 14:31:24.074925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.556 [2024-12-10 14:31:24.074940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.556 qpair failed and we were unable to recover it. 00:29:23.556 [2024-12-10 14:31:24.084752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.556 [2024-12-10 14:31:24.084808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.556 [2024-12-10 14:31:24.084820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.556 [2024-12-10 14:31:24.084827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.556 [2024-12-10 14:31:24.084833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.556 [2024-12-10 14:31:24.084848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.556 qpair failed and we were unable to recover it. 00:29:23.556 [2024-12-10 14:31:24.094852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.556 [2024-12-10 14:31:24.094908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.556 [2024-12-10 14:31:24.094926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.556 [2024-12-10 14:31:24.094933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.556 [2024-12-10 14:31:24.094940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.556 [2024-12-10 14:31:24.094954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.556 qpair failed and we were unable to recover it. 
00:29:23.556 [2024-12-10 14:31:24.104914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.556 [2024-12-10 14:31:24.104971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.556 [2024-12-10 14:31:24.104984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.556 [2024-12-10 14:31:24.104991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.556 [2024-12-10 14:31:24.104997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.556 [2024-12-10 14:31:24.105012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.556 qpair failed and we were unable to recover it. 00:29:23.556 [2024-12-10 14:31:24.114897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.556 [2024-12-10 14:31:24.114953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.556 [2024-12-10 14:31:24.114966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.556 [2024-12-10 14:31:24.114973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.556 [2024-12-10 14:31:24.114980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.556 [2024-12-10 14:31:24.114995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.556 qpair failed and we were unable to recover it. 00:29:23.556 [2024-12-10 14:31:24.124947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.556 [2024-12-10 14:31:24.125037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.556 [2024-12-10 14:31:24.125050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.556 [2024-12-10 14:31:24.125057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.556 [2024-12-10 14:31:24.125063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:23.556 [2024-12-10 14:31:24.125077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.556 qpair failed and we were unable to recover it. 
00:29:23.556 [2024-12-10 14:31:24.134877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.556 [2024-12-10 14:31:24.134936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.556 [2024-12-10 14:31:24.134950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.556 [2024-12-10 14:31:24.134957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.556 [2024-12-10 14:31:24.134966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.556 [2024-12-10 14:31:24.134980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.556 qpair failed and we were unable to recover it.
00:29:23.556 [2024-12-10 14:31:24.144978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.556 [2024-12-10 14:31:24.145028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.556 [2024-12-10 14:31:24.145041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.556 [2024-12-10 14:31:24.145048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.556 [2024-12-10 14:31:24.145054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.556 [2024-12-10 14:31:24.145069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.556 qpair failed and we were unable to recover it.
00:29:23.556 [2024-12-10 14:31:24.154926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.556 [2024-12-10 14:31:24.154982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.556 [2024-12-10 14:31:24.154995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.556 [2024-12-10 14:31:24.155002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.556 [2024-12-10 14:31:24.155009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.557 [2024-12-10 14:31:24.155025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.557 qpair failed and we were unable to recover it.
00:29:23.557 [2024-12-10 14:31:24.165041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.557 [2024-12-10 14:31:24.165098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.557 [2024-12-10 14:31:24.165112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.557 [2024-12-10 14:31:24.165119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.557 [2024-12-10 14:31:24.165125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.557 [2024-12-10 14:31:24.165140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.557 qpair failed and we were unable to recover it.
00:29:23.557 [2024-12-10 14:31:24.175075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.557 [2024-12-10 14:31:24.175130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.557 [2024-12-10 14:31:24.175144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.557 [2024-12-10 14:31:24.175150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.557 [2024-12-10 14:31:24.175157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.557 [2024-12-10 14:31:24.175172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.557 qpair failed and we were unable to recover it.
00:29:23.557 [2024-12-10 14:31:24.185106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.557 [2024-12-10 14:31:24.185160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.557 [2024-12-10 14:31:24.185172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.557 [2024-12-10 14:31:24.185180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.557 [2024-12-10 14:31:24.185185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.557 [2024-12-10 14:31:24.185200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.557 qpair failed and we were unable to recover it.
00:29:23.557 [2024-12-10 14:31:24.195110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.557 [2024-12-10 14:31:24.195163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.557 [2024-12-10 14:31:24.195177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.557 [2024-12-10 14:31:24.195184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.557 [2024-12-10 14:31:24.195190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.557 [2024-12-10 14:31:24.195205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.557 qpair failed and we were unable to recover it.
00:29:23.557 [2024-12-10 14:31:24.205159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.557 [2024-12-10 14:31:24.205212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.557 [2024-12-10 14:31:24.205235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.557 [2024-12-10 14:31:24.205243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.557 [2024-12-10 14:31:24.205249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.557 [2024-12-10 14:31:24.205265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.557 qpair failed and we were unable to recover it.
00:29:23.557 [2024-12-10 14:31:24.215174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.557 [2024-12-10 14:31:24.215233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.557 [2024-12-10 14:31:24.215246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.557 [2024-12-10 14:31:24.215253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.557 [2024-12-10 14:31:24.215259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.557 [2024-12-10 14:31:24.215274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.557 qpair failed and we were unable to recover it.
00:29:23.557 [2024-12-10 14:31:24.225158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.557 [2024-12-10 14:31:24.225214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.557 [2024-12-10 14:31:24.225234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.557 [2024-12-10 14:31:24.225241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.557 [2024-12-10 14:31:24.225247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.557 [2024-12-10 14:31:24.225262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.557 qpair failed and we were unable to recover it.
00:29:23.557 [2024-12-10 14:31:24.235244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.557 [2024-12-10 14:31:24.235306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.557 [2024-12-10 14:31:24.235319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.557 [2024-12-10 14:31:24.235326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.557 [2024-12-10 14:31:24.235332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.557 [2024-12-10 14:31:24.235346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.557 qpair failed and we were unable to recover it.
00:29:23.557 [2024-12-10 14:31:24.245265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.557 [2024-12-10 14:31:24.245341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.557 [2024-12-10 14:31:24.245354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.557 [2024-12-10 14:31:24.245361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.557 [2024-12-10 14:31:24.245367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.557 [2024-12-10 14:31:24.245381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.557 qpair failed and we were unable to recover it.
00:29:23.557 [2024-12-10 14:31:24.255250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.557 [2024-12-10 14:31:24.255298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.557 [2024-12-10 14:31:24.255311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.557 [2024-12-10 14:31:24.255318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.557 [2024-12-10 14:31:24.255324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.557 [2024-12-10 14:31:24.255339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.557 qpair failed and we were unable to recover it.
00:29:23.557 [2024-12-10 14:31:24.265306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.557 [2024-12-10 14:31:24.265356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.557 [2024-12-10 14:31:24.265369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.557 [2024-12-10 14:31:24.265380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.557 [2024-12-10 14:31:24.265386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.557 [2024-12-10 14:31:24.265400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.557 qpair failed and we were unable to recover it.
00:29:23.557 [2024-12-10 14:31:24.275391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.557 [2024-12-10 14:31:24.275446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.557 [2024-12-10 14:31:24.275459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.557 [2024-12-10 14:31:24.275466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.557 [2024-12-10 14:31:24.275472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.557 [2024-12-10 14:31:24.275486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.557 qpair failed and we were unable to recover it.
00:29:23.557 [2024-12-10 14:31:24.285371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.557 [2024-12-10 14:31:24.285435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.557 [2024-12-10 14:31:24.285448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.557 [2024-12-10 14:31:24.285455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.557 [2024-12-10 14:31:24.285461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.557 [2024-12-10 14:31:24.285475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.557 qpair failed and we were unable to recover it.
00:29:23.816 [2024-12-10 14:31:24.295498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.816 [2024-12-10 14:31:24.295557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.816 [2024-12-10 14:31:24.295570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.816 [2024-12-10 14:31:24.295577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.816 [2024-12-10 14:31:24.295584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.816 [2024-12-10 14:31:24.295599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.816 qpair failed and we were unable to recover it.
00:29:23.816 [2024-12-10 14:31:24.305473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.816 [2024-12-10 14:31:24.305525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.816 [2024-12-10 14:31:24.305538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.816 [2024-12-10 14:31:24.305545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.816 [2024-12-10 14:31:24.305552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.816 [2024-12-10 14:31:24.305570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.816 qpair failed and we were unable to recover it.
00:29:23.816 [2024-12-10 14:31:24.315522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.816 [2024-12-10 14:31:24.315578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.816 [2024-12-10 14:31:24.315592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.816 [2024-12-10 14:31:24.315598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.816 [2024-12-10 14:31:24.315605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.816 [2024-12-10 14:31:24.315619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.816 qpair failed and we were unable to recover it.
00:29:23.816 [2024-12-10 14:31:24.325523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.816 [2024-12-10 14:31:24.325596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.816 [2024-12-10 14:31:24.325609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.816 [2024-12-10 14:31:24.325615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.816 [2024-12-10 14:31:24.325622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.816 [2024-12-10 14:31:24.325636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.816 qpair failed and we were unable to recover it.
00:29:23.816 [2024-12-10 14:31:24.335548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.816 [2024-12-10 14:31:24.335604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.816 [2024-12-10 14:31:24.335617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.816 [2024-12-10 14:31:24.335624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.816 [2024-12-10 14:31:24.335630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.816 [2024-12-10 14:31:24.335644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.816 qpair failed and we were unable to recover it.
00:29:23.816 [2024-12-10 14:31:24.345580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.816 [2024-12-10 14:31:24.345631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.816 [2024-12-10 14:31:24.345645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.816 [2024-12-10 14:31:24.345652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.816 [2024-12-10 14:31:24.345658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.816 [2024-12-10 14:31:24.345672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.816 qpair failed and we were unable to recover it.
00:29:23.816 [2024-12-10 14:31:24.355595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.816 [2024-12-10 14:31:24.355655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.816 [2024-12-10 14:31:24.355668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.816 [2024-12-10 14:31:24.355676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.816 [2024-12-10 14:31:24.355682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.816 [2024-12-10 14:31:24.355697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.816 qpair failed and we were unable to recover it.
00:29:23.816 [2024-12-10 14:31:24.365616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.816 [2024-12-10 14:31:24.365674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.816 [2024-12-10 14:31:24.365687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.816 [2024-12-10 14:31:24.365694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.816 [2024-12-10 14:31:24.365701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.816 [2024-12-10 14:31:24.365715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.817 qpair failed and we were unable to recover it.
00:29:23.817 [2024-12-10 14:31:24.375636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.817 [2024-12-10 14:31:24.375706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.817 [2024-12-10 14:31:24.375718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.817 [2024-12-10 14:31:24.375725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.817 [2024-12-10 14:31:24.375732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.817 [2024-12-10 14:31:24.375747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.817 qpair failed and we were unable to recover it.
00:29:23.817 [2024-12-10 14:31:24.385677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.817 [2024-12-10 14:31:24.385734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.817 [2024-12-10 14:31:24.385746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.817 [2024-12-10 14:31:24.385754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.817 [2024-12-10 14:31:24.385760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.817 [2024-12-10 14:31:24.385775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.817 qpair failed and we were unable to recover it.
00:29:23.817 [2024-12-10 14:31:24.395710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.817 [2024-12-10 14:31:24.395758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.817 [2024-12-10 14:31:24.395771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.817 [2024-12-10 14:31:24.395781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.817 [2024-12-10 14:31:24.395788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.817 [2024-12-10 14:31:24.395802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.817 qpair failed and we were unable to recover it.
00:29:23.817 [2024-12-10 14:31:24.405747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.817 [2024-12-10 14:31:24.405799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.817 [2024-12-10 14:31:24.405812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.817 [2024-12-10 14:31:24.405819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.817 [2024-12-10 14:31:24.405826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.817 [2024-12-10 14:31:24.405840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.817 qpair failed and we were unable to recover it.
00:29:23.817 [2024-12-10 14:31:24.415774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.817 [2024-12-10 14:31:24.415835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.817 [2024-12-10 14:31:24.415848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.817 [2024-12-10 14:31:24.415855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.817 [2024-12-10 14:31:24.415862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.817 [2024-12-10 14:31:24.415877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.817 qpair failed and we were unable to recover it.
00:29:23.817 [2024-12-10 14:31:24.425801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.817 [2024-12-10 14:31:24.425855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.817 [2024-12-10 14:31:24.425868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.817 [2024-12-10 14:31:24.425876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.817 [2024-12-10 14:31:24.425882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.817 [2024-12-10 14:31:24.425897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.817 qpair failed and we were unable to recover it.
00:29:23.817 [2024-12-10 14:31:24.435823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.817 [2024-12-10 14:31:24.435880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.817 [2024-12-10 14:31:24.435893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.817 [2024-12-10 14:31:24.435900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.817 [2024-12-10 14:31:24.435907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.817 [2024-12-10 14:31:24.435924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.817 qpair failed and we were unable to recover it.
00:29:23.817 [2024-12-10 14:31:24.445865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.817 [2024-12-10 14:31:24.445969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.817 [2024-12-10 14:31:24.445983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.817 [2024-12-10 14:31:24.445989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.817 [2024-12-10 14:31:24.445995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.817 [2024-12-10 14:31:24.446010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.817 qpair failed and we were unable to recover it.
00:29:23.817 [2024-12-10 14:31:24.455822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.817 [2024-12-10 14:31:24.455874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.817 [2024-12-10 14:31:24.455887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.817 [2024-12-10 14:31:24.455893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.817 [2024-12-10 14:31:24.455900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.817 [2024-12-10 14:31:24.455915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.817 qpair failed and we were unable to recover it.
00:29:23.817 [2024-12-10 14:31:24.465913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.817 [2024-12-10 14:31:24.465963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.817 [2024-12-10 14:31:24.465976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.817 [2024-12-10 14:31:24.465983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.817 [2024-12-10 14:31:24.465989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.817 [2024-12-10 14:31:24.466003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.817 qpair failed and we were unable to recover it.
00:29:23.817 [2024-12-10 14:31:24.475949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.817 [2024-12-10 14:31:24.476003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.817 [2024-12-10 14:31:24.476016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.817 [2024-12-10 14:31:24.476023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.817 [2024-12-10 14:31:24.476029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.817 [2024-12-10 14:31:24.476043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.817 qpair failed and we were unable to recover it.
00:29:23.817 [2024-12-10 14:31:24.485961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.817 [2024-12-10 14:31:24.486016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.817 [2024-12-10 14:31:24.486030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.817 [2024-12-10 14:31:24.486037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.817 [2024-12-10 14:31:24.486043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.817 [2024-12-10 14:31:24.486058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.817 qpair failed and we were unable to recover it.
00:29:23.817 [2024-12-10 14:31:24.496003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.817 [2024-12-10 14:31:24.496056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.817 [2024-12-10 14:31:24.496068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.817 [2024-12-10 14:31:24.496076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.817 [2024-12-10 14:31:24.496083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.817 [2024-12-10 14:31:24.496097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.817 qpair failed and we were unable to recover it.
00:29:23.817 [2024-12-10 14:31:24.506035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.818 [2024-12-10 14:31:24.506090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.818 [2024-12-10 14:31:24.506103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.818 [2024-12-10 14:31:24.506111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.818 [2024-12-10 14:31:24.506117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.818 [2024-12-10 14:31:24.506132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.818 qpair failed and we were unable to recover it.
00:29:23.818 [2024-12-10 14:31:24.516015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.818 [2024-12-10 14:31:24.516117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.818 [2024-12-10 14:31:24.516131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.818 [2024-12-10 14:31:24.516138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.818 [2024-12-10 14:31:24.516144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.818 [2024-12-10 14:31:24.516158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.818 qpair failed and we were unable to recover it.
00:29:23.818 [2024-12-10 14:31:24.526141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.818 [2024-12-10 14:31:24.526200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.818 [2024-12-10 14:31:24.526216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.818 [2024-12-10 14:31:24.526227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.818 [2024-12-10 14:31:24.526234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.818 [2024-12-10 14:31:24.526249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.818 qpair failed and we were unable to recover it.
00:29:23.818 [2024-12-10 14:31:24.536120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.818 [2024-12-10 14:31:24.536181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.818 [2024-12-10 14:31:24.536195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.818 [2024-12-10 14:31:24.536202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.818 [2024-12-10 14:31:24.536208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.818 [2024-12-10 14:31:24.536226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.818 qpair failed and we were unable to recover it.
00:29:23.818 [2024-12-10 14:31:24.546175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.818 [2024-12-10 14:31:24.546236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.818 [2024-12-10 14:31:24.546250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.818 [2024-12-10 14:31:24.546257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.818 [2024-12-10 14:31:24.546263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:23.818 [2024-12-10 14:31:24.546278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.818 qpair failed and we were unable to recover it.
00:29:24.076 [2024-12-10 14:31:24.556214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.076 [2024-12-10 14:31:24.556306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.076 [2024-12-10 14:31:24.556320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.076 [2024-12-10 14:31:24.556327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.076 [2024-12-10 14:31:24.556333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.076 [2024-12-10 14:31:24.556348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.076 qpair failed and we were unable to recover it.
00:29:24.076 [2024-12-10 14:31:24.566197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.076 [2024-12-10 14:31:24.566260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.076 [2024-12-10 14:31:24.566273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.076 [2024-12-10 14:31:24.566280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.076 [2024-12-10 14:31:24.566291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.076 [2024-12-10 14:31:24.566306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.076 qpair failed and we were unable to recover it.
00:29:24.076 [2024-12-10 14:31:24.576208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.076 [2024-12-10 14:31:24.576274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.076 [2024-12-10 14:31:24.576287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.076 [2024-12-10 14:31:24.576295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.076 [2024-12-10 14:31:24.576302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.076 [2024-12-10 14:31:24.576317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.076 qpair failed and we were unable to recover it.
00:29:24.076 [2024-12-10 14:31:24.586271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.076 [2024-12-10 14:31:24.586326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.076 [2024-12-10 14:31:24.586339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.076 [2024-12-10 14:31:24.586346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.076 [2024-12-10 14:31:24.586352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.076 [2024-12-10 14:31:24.586367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.076 qpair failed and we were unable to recover it.
00:29:24.076 [2024-12-10 14:31:24.596289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.076 [2024-12-10 14:31:24.596374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.076 [2024-12-10 14:31:24.596387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.076 [2024-12-10 14:31:24.596394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.076 [2024-12-10 14:31:24.596400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.076 [2024-12-10 14:31:24.596415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.076 qpair failed and we were unable to recover it.
00:29:24.076 [2024-12-10 14:31:24.606342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.076 [2024-12-10 14:31:24.606444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.076 [2024-12-10 14:31:24.606458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.076 [2024-12-10 14:31:24.606465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.076 [2024-12-10 14:31:24.606471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.076 [2024-12-10 14:31:24.606485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.076 qpair failed and we were unable to recover it.
00:29:24.076 [2024-12-10 14:31:24.616346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.076 [2024-12-10 14:31:24.616407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.076 [2024-12-10 14:31:24.616420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.076 [2024-12-10 14:31:24.616427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.076 [2024-12-10 14:31:24.616434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.076 [2024-12-10 14:31:24.616448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.076 qpair failed and we were unable to recover it.
00:29:24.076 [2024-12-10 14:31:24.626372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.076 [2024-12-10 14:31:24.626424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.076 [2024-12-10 14:31:24.626437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.076 [2024-12-10 14:31:24.626444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.077 [2024-12-10 14:31:24.626450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.077 [2024-12-10 14:31:24.626464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.077 qpair failed and we were unable to recover it.
00:29:24.077 [2024-12-10 14:31:24.636396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.077 [2024-12-10 14:31:24.636448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.077 [2024-12-10 14:31:24.636461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.077 [2024-12-10 14:31:24.636468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.077 [2024-12-10 14:31:24.636474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.077 [2024-12-10 14:31:24.636489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.077 qpair failed and we were unable to recover it.
00:29:24.077 [2024-12-10 14:31:24.646437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.077 [2024-12-10 14:31:24.646494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.077 [2024-12-10 14:31:24.646507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.077 [2024-12-10 14:31:24.646514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.077 [2024-12-10 14:31:24.646521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.077 [2024-12-10 14:31:24.646535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.077 qpair failed and we were unable to recover it.
00:29:24.077 [2024-12-10 14:31:24.656456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.077 [2024-12-10 14:31:24.656510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.077 [2024-12-10 14:31:24.656525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.077 [2024-12-10 14:31:24.656532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.077 [2024-12-10 14:31:24.656539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.077 [2024-12-10 14:31:24.656554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.077 qpair failed and we were unable to recover it.
00:29:24.077 [2024-12-10 14:31:24.666507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.077 [2024-12-10 14:31:24.666563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.077 [2024-12-10 14:31:24.666576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.077 [2024-12-10 14:31:24.666583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.077 [2024-12-10 14:31:24.666589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.077 [2024-12-10 14:31:24.666604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.077 qpair failed and we were unable to recover it.
00:29:24.077 [2024-12-10 14:31:24.676515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.077 [2024-12-10 14:31:24.676564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.077 [2024-12-10 14:31:24.676577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.077 [2024-12-10 14:31:24.676584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.077 [2024-12-10 14:31:24.676590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.077 [2024-12-10 14:31:24.676605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.077 qpair failed and we were unable to recover it.
00:29:24.077 [2024-12-10 14:31:24.686546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.077 [2024-12-10 14:31:24.686602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.077 [2024-12-10 14:31:24.686614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.077 [2024-12-10 14:31:24.686621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.077 [2024-12-10 14:31:24.686628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.077 [2024-12-10 14:31:24.686642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.077 qpair failed and we were unable to recover it.
00:29:24.077 [2024-12-10 14:31:24.696598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.077 [2024-12-10 14:31:24.696653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.077 [2024-12-10 14:31:24.696666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.077 [2024-12-10 14:31:24.696673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.077 [2024-12-10 14:31:24.696682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.077 [2024-12-10 14:31:24.696697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.077 qpair failed and we were unable to recover it.
00:29:24.077 [2024-12-10 14:31:24.706570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.077 [2024-12-10 14:31:24.706625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.077 [2024-12-10 14:31:24.706638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.077 [2024-12-10 14:31:24.706645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.077 [2024-12-10 14:31:24.706652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.077 [2024-12-10 14:31:24.706666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.077 qpair failed and we were unable to recover it.
00:29:24.077 [2024-12-10 14:31:24.716655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.077 [2024-12-10 14:31:24.716709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.077 [2024-12-10 14:31:24.716722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.077 [2024-12-10 14:31:24.716730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.077 [2024-12-10 14:31:24.716736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.077 [2024-12-10 14:31:24.716752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.077 qpair failed and we were unable to recover it.
00:29:24.077 [2024-12-10 14:31:24.726651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.077 [2024-12-10 14:31:24.726729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.077 [2024-12-10 14:31:24.726742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.077 [2024-12-10 14:31:24.726750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.077 [2024-12-10 14:31:24.726757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.077 [2024-12-10 14:31:24.726772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.077 qpair failed and we were unable to recover it.
00:29:24.077 [2024-12-10 14:31:24.736677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.077 [2024-12-10 14:31:24.736733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.077 [2024-12-10 14:31:24.736746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.077 [2024-12-10 14:31:24.736754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.077 [2024-12-10 14:31:24.736761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.077 [2024-12-10 14:31:24.736775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.077 qpair failed and we were unable to recover it.
00:29:24.077 [2024-12-10 14:31:24.746709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.077 [2024-12-10 14:31:24.746761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.077 [2024-12-10 14:31:24.746775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.077 [2024-12-10 14:31:24.746782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.077 [2024-12-10 14:31:24.746788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.077 [2024-12-10 14:31:24.746802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.077 qpair failed and we were unable to recover it.
00:29:24.077 [2024-12-10 14:31:24.756723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.077 [2024-12-10 14:31:24.756812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.077 [2024-12-10 14:31:24.756825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.077 [2024-12-10 14:31:24.756831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.077 [2024-12-10 14:31:24.756837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.077 [2024-12-10 14:31:24.756851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.077 qpair failed and we were unable to recover it.
00:29:24.078 [2024-12-10 14:31:24.766792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.078 [2024-12-10 14:31:24.766854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.078 [2024-12-10 14:31:24.766867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.078 [2024-12-10 14:31:24.766875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.078 [2024-12-10 14:31:24.766881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.078 [2024-12-10 14:31:24.766895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.078 qpair failed and we were unable to recover it.
00:29:24.078 [2024-12-10 14:31:24.776788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.078 [2024-12-10 14:31:24.776844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.078 [2024-12-10 14:31:24.776857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.078 [2024-12-10 14:31:24.776864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.078 [2024-12-10 14:31:24.776870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.078 [2024-12-10 14:31:24.776884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.078 qpair failed and we were unable to recover it.
00:29:24.078 [2024-12-10 14:31:24.786813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.078 [2024-12-10 14:31:24.786870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.078 [2024-12-10 14:31:24.786887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.078 [2024-12-10 14:31:24.786894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.078 [2024-12-10 14:31:24.786901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.078 [2024-12-10 14:31:24.786915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.078 qpair failed and we were unable to recover it.
00:29:24.078 [2024-12-10 14:31:24.796865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.078 [2024-12-10 14:31:24.796951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.078 [2024-12-10 14:31:24.796964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.078 [2024-12-10 14:31:24.796971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.078 [2024-12-10 14:31:24.796977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.078 [2024-12-10 14:31:24.796991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.078 qpair failed and we were unable to recover it.
00:29:24.078 [2024-12-10 14:31:24.806884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.078 [2024-12-10 14:31:24.806940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.078 [2024-12-10 14:31:24.806954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.078 [2024-12-10 14:31:24.806961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.078 [2024-12-10 14:31:24.806967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.078 [2024-12-10 14:31:24.806982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.078 qpair failed and we were unable to recover it.
00:29:24.336 [2024-12-10 14:31:24.816905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.336 [2024-12-10 14:31:24.816982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.336 [2024-12-10 14:31:24.816995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.336 [2024-12-10 14:31:24.817002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.336 [2024-12-10 14:31:24.817008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.336 [2024-12-10 14:31:24.817022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.336 qpair failed and we were unable to recover it.
00:29:24.336 [2024-12-10 14:31:24.826966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.336 [2024-12-10 14:31:24.827047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.336 [2024-12-10 14:31:24.827060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.336 [2024-12-10 14:31:24.827070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.336 [2024-12-10 14:31:24.827076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.336 [2024-12-10 14:31:24.827090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.336 qpair failed and we were unable to recover it.
00:29:24.336 [2024-12-10 14:31:24.836963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.336 [2024-12-10 14:31:24.837025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.336 [2024-12-10 14:31:24.837038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.336 [2024-12-10 14:31:24.837045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.336 [2024-12-10 14:31:24.837051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.336 [2024-12-10 14:31:24.837066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.336 qpair failed and we were unable to recover it.
00:29:24.336 [2024-12-10 14:31:24.846995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.336 [2024-12-10 14:31:24.847053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.336 [2024-12-10 14:31:24.847066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.336 [2024-12-10 14:31:24.847073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.336 [2024-12-10 14:31:24.847080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.336 [2024-12-10 14:31:24.847094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.336 qpair failed and we were unable to recover it.
00:29:24.336 [2024-12-10 14:31:24.857017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.336 [2024-12-10 14:31:24.857117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.336 [2024-12-10 14:31:24.857130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.336 [2024-12-10 14:31:24.857138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.336 [2024-12-10 14:31:24.857144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.336 [2024-12-10 14:31:24.857158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.336 qpair failed and we were unable to recover it.
00:29:24.336 [2024-12-10 14:31:24.867067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.337 [2024-12-10 14:31:24.867129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.337 [2024-12-10 14:31:24.867141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.337 [2024-12-10 14:31:24.867149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.337 [2024-12-10 14:31:24.867155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.337 [2024-12-10 14:31:24.867173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.337 qpair failed and we were unable to recover it.
00:29:24.337 [2024-12-10 14:31:24.877102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.337 [2024-12-10 14:31:24.877160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.337 [2024-12-10 14:31:24.877174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.337 [2024-12-10 14:31:24.877181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.337 [2024-12-10 14:31:24.877187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.337 [2024-12-10 14:31:24.877201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.337 qpair failed and we were unable to recover it.
00:29:24.337 [2024-12-10 14:31:24.887109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.337 [2024-12-10 14:31:24.887166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.337 [2024-12-10 14:31:24.887179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.337 [2024-12-10 14:31:24.887186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.337 [2024-12-10 14:31:24.887193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.337 [2024-12-10 14:31:24.887207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.337 qpair failed and we were unable to recover it.
00:29:24.337 [2024-12-10 14:31:24.897162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.337 [2024-12-10 14:31:24.897215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.337 [2024-12-10 14:31:24.897231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.337 [2024-12-10 14:31:24.897238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.337 [2024-12-10 14:31:24.897245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.337 [2024-12-10 14:31:24.897259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.337 qpair failed and we were unable to recover it.
00:29:24.337 [2024-12-10 14:31:24.907183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.337 [2024-12-10 14:31:24.907237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.337 [2024-12-10 14:31:24.907250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.337 [2024-12-10 14:31:24.907258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.337 [2024-12-10 14:31:24.907264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.337 [2024-12-10 14:31:24.907279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.337 qpair failed and we were unable to recover it.
00:29:24.337 [2024-12-10 14:31:24.917184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.337 [2024-12-10 14:31:24.917243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.337 [2024-12-10 14:31:24.917257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.337 [2024-12-10 14:31:24.917264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.337 [2024-12-10 14:31:24.917270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.337 [2024-12-10 14:31:24.917285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.337 qpair failed and we were unable to recover it.
00:29:24.337 [2024-12-10 14:31:24.927238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.337 [2024-12-10 14:31:24.927310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.337 [2024-12-10 14:31:24.927323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.337 [2024-12-10 14:31:24.927330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.337 [2024-12-10 14:31:24.927336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.337 [2024-12-10 14:31:24.927352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.337 qpair failed and we were unable to recover it.
00:29:24.337 [2024-12-10 14:31:24.937296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.337 [2024-12-10 14:31:24.937356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.337 [2024-12-10 14:31:24.937369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.337 [2024-12-10 14:31:24.937376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.337 [2024-12-10 14:31:24.937382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.337 [2024-12-10 14:31:24.937398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.337 qpair failed and we were unable to recover it.
00:29:24.337 [2024-12-10 14:31:24.947272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.337 [2024-12-10 14:31:24.947320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.337 [2024-12-10 14:31:24.947333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.337 [2024-12-10 14:31:24.947340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.337 [2024-12-10 14:31:24.947346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.337 [2024-12-10 14:31:24.947360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.337 qpair failed and we were unable to recover it.
00:29:24.337 [2024-12-10 14:31:24.957304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.337 [2024-12-10 14:31:24.957361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.337 [2024-12-10 14:31:24.957374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.337 [2024-12-10 14:31:24.957385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.337 [2024-12-10 14:31:24.957391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.337 [2024-12-10 14:31:24.957406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.337 qpair failed and we were unable to recover it.
00:29:24.337 [2024-12-10 14:31:24.967349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.337 [2024-12-10 14:31:24.967411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.337 [2024-12-10 14:31:24.967424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.337 [2024-12-10 14:31:24.967431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.337 [2024-12-10 14:31:24.967437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.337 [2024-12-10 14:31:24.967452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.337 qpair failed and we were unable to recover it.
00:29:24.337 [2024-12-10 14:31:24.977390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.337 [2024-12-10 14:31:24.977443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.337 [2024-12-10 14:31:24.977456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.337 [2024-12-10 14:31:24.977463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.337 [2024-12-10 14:31:24.977469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.337 [2024-12-10 14:31:24.977483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.337 qpair failed and we were unable to recover it.
00:29:24.337 [2024-12-10 14:31:24.987385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.337 [2024-12-10 14:31:24.987438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.337 [2024-12-10 14:31:24.987452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.337 [2024-12-10 14:31:24.987459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.337 [2024-12-10 14:31:24.987465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.337 [2024-12-10 14:31:24.987479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.337 qpair failed and we were unable to recover it.
00:29:24.337 [2024-12-10 14:31:24.997448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.337 [2024-12-10 14:31:24.997506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.338 [2024-12-10 14:31:24.997520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.338 [2024-12-10 14:31:24.997527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.338 [2024-12-10 14:31:24.997533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.338 [2024-12-10 14:31:24.997550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.338 qpair failed and we were unable to recover it.
00:29:24.338 [2024-12-10 14:31:25.007441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.338 [2024-12-10 14:31:25.007498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.338 [2024-12-10 14:31:25.007512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.338 [2024-12-10 14:31:25.007519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.338 [2024-12-10 14:31:25.007525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.338 [2024-12-10 14:31:25.007540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.338 qpair failed and we were unable to recover it.
00:29:24.338 [2024-12-10 14:31:25.017480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.338 [2024-12-10 14:31:25.017533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.338 [2024-12-10 14:31:25.017545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.338 [2024-12-10 14:31:25.017552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.338 [2024-12-10 14:31:25.017559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.338 [2024-12-10 14:31:25.017574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.338 qpair failed and we were unable to recover it.
00:29:24.338 [2024-12-10 14:31:25.027545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.338 [2024-12-10 14:31:25.027600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.338 [2024-12-10 14:31:25.027612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.338 [2024-12-10 14:31:25.027619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.338 [2024-12-10 14:31:25.027625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.338 [2024-12-10 14:31:25.027640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.338 qpair failed and we were unable to recover it.
00:29:24.338 [2024-12-10 14:31:25.037529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.338 [2024-12-10 14:31:25.037587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.338 [2024-12-10 14:31:25.037600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.338 [2024-12-10 14:31:25.037607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.338 [2024-12-10 14:31:25.037613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.338 [2024-12-10 14:31:25.037627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.338 qpair failed and we were unable to recover it.
00:29:24.338 [2024-12-10 14:31:25.047584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.338 [2024-12-10 14:31:25.047647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.338 [2024-12-10 14:31:25.047660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.338 [2024-12-10 14:31:25.047667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.338 [2024-12-10 14:31:25.047673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.338 [2024-12-10 14:31:25.047687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.338 qpair failed and we were unable to recover it.
00:29:24.338 [2024-12-10 14:31:25.057578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.338 [2024-12-10 14:31:25.057685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.338 [2024-12-10 14:31:25.057699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.338 [2024-12-10 14:31:25.057705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.338 [2024-12-10 14:31:25.057712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.338 [2024-12-10 14:31:25.057726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.338 qpair failed and we were unable to recover it.
00:29:24.338 [2024-12-10 14:31:25.067630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.338 [2024-12-10 14:31:25.067682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.338 [2024-12-10 14:31:25.067695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.338 [2024-12-10 14:31:25.067702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.338 [2024-12-10 14:31:25.067708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.338 [2024-12-10 14:31:25.067722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.338 qpair failed and we were unable to recover it.
00:29:24.595 [2024-12-10 14:31:25.077665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.595 [2024-12-10 14:31:25.077720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.595 [2024-12-10 14:31:25.077733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.595 [2024-12-10 14:31:25.077741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.595 [2024-12-10 14:31:25.077748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.595 [2024-12-10 14:31:25.077762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.595 qpair failed and we were unable to recover it.
00:29:24.595 [2024-12-10 14:31:25.087667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.595 [2024-12-10 14:31:25.087726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.595 [2024-12-10 14:31:25.087741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.595 [2024-12-10 14:31:25.087749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.595 [2024-12-10 14:31:25.087755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.595 [2024-12-10 14:31:25.087769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.595 qpair failed and we were unable to recover it.
00:29:24.595 [2024-12-10 14:31:25.097707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.595 [2024-12-10 14:31:25.097802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.595 [2024-12-10 14:31:25.097815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.595 [2024-12-10 14:31:25.097822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.595 [2024-12-10 14:31:25.097828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.595 [2024-12-10 14:31:25.097842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.595 qpair failed and we were unable to recover it.
00:29:24.595 [2024-12-10 14:31:25.107756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.595 [2024-12-10 14:31:25.107807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.595 [2024-12-10 14:31:25.107821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.595 [2024-12-10 14:31:25.107828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.595 [2024-12-10 14:31:25.107834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.595 [2024-12-10 14:31:25.107848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.595 qpair failed and we were unable to recover it.
00:29:24.595 [2024-12-10 14:31:25.117781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.595 [2024-12-10 14:31:25.117838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.595 [2024-12-10 14:31:25.117851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.595 [2024-12-10 14:31:25.117859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.595 [2024-12-10 14:31:25.117865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.595 [2024-12-10 14:31:25.117879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.595 qpair failed and we were unable to recover it.
00:29:24.595 [2024-12-10 14:31:25.127801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.595 [2024-12-10 14:31:25.127855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.595 [2024-12-10 14:31:25.127869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.595 [2024-12-10 14:31:25.127876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.595 [2024-12-10 14:31:25.127885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.595 [2024-12-10 14:31:25.127900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.595 qpair failed and we were unable to recover it.
00:29:24.595 [2024-12-10 14:31:25.137829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.595 [2024-12-10 14:31:25.137882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.595 [2024-12-10 14:31:25.137895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.595 [2024-12-10 14:31:25.137902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.595 [2024-12-10 14:31:25.137908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.595 [2024-12-10 14:31:25.137923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.595 qpair failed and we were unable to recover it.
00:29:24.595 [2024-12-10 14:31:25.147890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.596 [2024-12-10 14:31:25.147948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.596 [2024-12-10 14:31:25.147962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.596 [2024-12-10 14:31:25.147969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.596 [2024-12-10 14:31:25.147975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.596 [2024-12-10 14:31:25.147990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.596 qpair failed and we were unable to recover it.
00:29:24.596 [2024-12-10 14:31:25.157876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.596 [2024-12-10 14:31:25.157928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.596 [2024-12-10 14:31:25.157941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.596 [2024-12-10 14:31:25.157948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.596 [2024-12-10 14:31:25.157954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.596 [2024-12-10 14:31:25.157968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.596 qpair failed and we were unable to recover it.
00:29:24.596 [2024-12-10 14:31:25.167907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.596 [2024-12-10 14:31:25.167960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.596 [2024-12-10 14:31:25.167973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.596 [2024-12-10 14:31:25.167980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.596 [2024-12-10 14:31:25.167986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.596 [2024-12-10 14:31:25.168001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.596 qpair failed and we were unable to recover it.
00:29:24.596 [2024-12-10 14:31:25.177857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.596 [2024-12-10 14:31:25.177914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.596 [2024-12-10 14:31:25.177927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.596 [2024-12-10 14:31:25.177934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.596 [2024-12-10 14:31:25.177940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.596 [2024-12-10 14:31:25.177955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.596 qpair failed and we were unable to recover it.
00:29:24.596 [2024-12-10 14:31:25.187962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.596 [2024-12-10 14:31:25.188064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.596 [2024-12-10 14:31:25.188077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.596 [2024-12-10 14:31:25.188084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.596 [2024-12-10 14:31:25.188091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.596 [2024-12-10 14:31:25.188105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.596 qpair failed and we were unable to recover it.
00:29:24.596 [2024-12-10 14:31:25.197990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.596 [2024-12-10 14:31:25.198041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.596 [2024-12-10 14:31:25.198054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.596 [2024-12-10 14:31:25.198061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.596 [2024-12-10 14:31:25.198068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.596 [2024-12-10 14:31:25.198083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.596 qpair failed and we were unable to recover it.
00:29:24.596 [2024-12-10 14:31:25.208021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.596 [2024-12-10 14:31:25.208077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.596 [2024-12-10 14:31:25.208090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.596 [2024-12-10 14:31:25.208097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.596 [2024-12-10 14:31:25.208104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.596 [2024-12-10 14:31:25.208119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.596 qpair failed and we were unable to recover it.
00:29:24.596 [2024-12-10 14:31:25.218055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.596 [2024-12-10 14:31:25.218139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.596 [2024-12-10 14:31:25.218156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.596 [2024-12-10 14:31:25.218163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.596 [2024-12-10 14:31:25.218169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.596 [2024-12-10 14:31:25.218184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.596 qpair failed and we were unable to recover it.
00:29:24.596 [2024-12-10 14:31:25.228101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.596 [2024-12-10 14:31:25.228155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.596 [2024-12-10 14:31:25.228168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.596 [2024-12-10 14:31:25.228175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.596 [2024-12-10 14:31:25.228182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.596 [2024-12-10 14:31:25.228196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.596 qpair failed and we were unable to recover it.
00:29:24.596 [2024-12-10 14:31:25.238104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.596 [2024-12-10 14:31:25.238156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.596 [2024-12-10 14:31:25.238170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.596 [2024-12-10 14:31:25.238177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.596 [2024-12-10 14:31:25.238183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.596 [2024-12-10 14:31:25.238197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.596 qpair failed and we were unable to recover it.
00:29:24.596 [2024-12-10 14:31:25.248140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.596 [2024-12-10 14:31:25.248197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.596 [2024-12-10 14:31:25.248210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.596 [2024-12-10 14:31:25.248220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.596 [2024-12-10 14:31:25.248226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.596 [2024-12-10 14:31:25.248241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.596 qpair failed and we were unable to recover it.
00:29:24.596 [2024-12-10 14:31:25.258146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.596 [2024-12-10 14:31:25.258213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.596 [2024-12-10 14:31:25.258230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.596 [2024-12-10 14:31:25.258238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.596 [2024-12-10 14:31:25.258247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.596 [2024-12-10 14:31:25.258262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.596 qpair failed and we were unable to recover it.
00:29:24.596 [2024-12-10 14:31:25.268188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.596 [2024-12-10 14:31:25.268238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.596 [2024-12-10 14:31:25.268251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.596 [2024-12-10 14:31:25.268258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.596 [2024-12-10 14:31:25.268264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.596 [2024-12-10 14:31:25.268279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.596 qpair failed and we were unable to recover it.
00:29:24.596 [2024-12-10 14:31:25.278215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.596 [2024-12-10 14:31:25.278275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.596 [2024-12-10 14:31:25.278289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.596 [2024-12-10 14:31:25.278296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.596 [2024-12-10 14:31:25.278302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.596 [2024-12-10 14:31:25.278317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.596 qpair failed and we were unable to recover it.
00:29:24.596 [2024-12-10 14:31:25.288258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.596 [2024-12-10 14:31:25.288359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.596 [2024-12-10 14:31:25.288372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.596 [2024-12-10 14:31:25.288379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.596 [2024-12-10 14:31:25.288385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.596 [2024-12-10 14:31:25.288401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.596 qpair failed and we were unable to recover it.
00:29:24.596 [2024-12-10 14:31:25.298310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.596 [2024-12-10 14:31:25.298373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.596 [2024-12-10 14:31:25.298386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.596 [2024-12-10 14:31:25.298393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.596 [2024-12-10 14:31:25.298399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.596 [2024-12-10 14:31:25.298414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.596 qpair failed and we were unable to recover it.
00:29:24.596 [2024-12-10 14:31:25.308323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.596 [2024-12-10 14:31:25.308373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.596 [2024-12-10 14:31:25.308387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.596 [2024-12-10 14:31:25.308394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.596 [2024-12-10 14:31:25.308400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.596 [2024-12-10 14:31:25.308415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.596 qpair failed and we were unable to recover it.
00:29:24.596 [2024-12-10 14:31:25.318324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.596 [2024-12-10 14:31:25.318378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.596 [2024-12-10 14:31:25.318391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.596 [2024-12-10 14:31:25.318398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.596 [2024-12-10 14:31:25.318405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.596 [2024-12-10 14:31:25.318420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.596 qpair failed and we were unable to recover it.
00:29:24.596 [2024-12-10 14:31:25.328389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.597 [2024-12-10 14:31:25.328448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.597 [2024-12-10 14:31:25.328461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.597 [2024-12-10 14:31:25.328468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.597 [2024-12-10 14:31:25.328475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.597 [2024-12-10 14:31:25.328489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.597 qpair failed and we were unable to recover it.
00:29:24.855 [2024-12-10 14:31:25.338405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.855 [2024-12-10 14:31:25.338463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.855 [2024-12-10 14:31:25.338476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.855 [2024-12-10 14:31:25.338483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.855 [2024-12-10 14:31:25.338490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.855 [2024-12-10 14:31:25.338504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.855 qpair failed and we were unable to recover it.
00:29:24.855 [2024-12-10 14:31:25.348381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.855 [2024-12-10 14:31:25.348438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.855 [2024-12-10 14:31:25.348454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.855 [2024-12-10 14:31:25.348462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.855 [2024-12-10 14:31:25.348468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.855 [2024-12-10 14:31:25.348484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.855 qpair failed and we were unable to recover it.
00:29:24.855 [2024-12-10 14:31:25.358423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.855 [2024-12-10 14:31:25.358491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.855 [2024-12-10 14:31:25.358504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.855 [2024-12-10 14:31:25.358512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.855 [2024-12-10 14:31:25.358518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.855 [2024-12-10 14:31:25.358532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.855 qpair failed and we were unable to recover it.
00:29:24.855 [2024-12-10 14:31:25.368481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.855 [2024-12-10 14:31:25.368538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.855 [2024-12-10 14:31:25.368552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.855 [2024-12-10 14:31:25.368559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.855 [2024-12-10 14:31:25.368565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.855 [2024-12-10 14:31:25.368580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.855 qpair failed and we were unable to recover it.
00:29:24.855 [2024-12-10 14:31:25.378490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.855 [2024-12-10 14:31:25.378566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.855 [2024-12-10 14:31:25.378579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.855 [2024-12-10 14:31:25.378587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.855 [2024-12-10 14:31:25.378593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.855 [2024-12-10 14:31:25.378607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.855 qpair failed and we were unable to recover it.
00:29:24.855 [2024-12-10 14:31:25.388539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.855 [2024-12-10 14:31:25.388598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.855 [2024-12-10 14:31:25.388611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.855 [2024-12-10 14:31:25.388621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.855 [2024-12-10 14:31:25.388627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.855 [2024-12-10 14:31:25.388642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.855 qpair failed and we were unable to recover it.
00:29:24.855 [2024-12-10 14:31:25.398498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.855 [2024-12-10 14:31:25.398555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.855 [2024-12-10 14:31:25.398568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.855 [2024-12-10 14:31:25.398575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.855 [2024-12-10 14:31:25.398582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.855 [2024-12-10 14:31:25.398596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.855 qpair failed and we were unable to recover it.
00:29:24.855 [2024-12-10 14:31:25.408538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.855 [2024-12-10 14:31:25.408593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.855 [2024-12-10 14:31:25.408606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.855 [2024-12-10 14:31:25.408613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.855 [2024-12-10 14:31:25.408620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.855 [2024-12-10 14:31:25.408635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.855 qpair failed and we were unable to recover it.
00:29:24.855 [2024-12-10 14:31:25.418555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.855 [2024-12-10 14:31:25.418610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.855 [2024-12-10 14:31:25.418624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.855 [2024-12-10 14:31:25.418631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.855 [2024-12-10 14:31:25.418637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.855 [2024-12-10 14:31:25.418652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.855 qpair failed and we were unable to recover it.
00:29:24.855 [2024-12-10 14:31:25.428608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.855 [2024-12-10 14:31:25.428692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.855 [2024-12-10 14:31:25.428705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.855 [2024-12-10 14:31:25.428712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.855 [2024-12-10 14:31:25.428718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.855 [2024-12-10 14:31:25.428735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.855 qpair failed and we were unable to recover it.
00:29:24.855 [2024-12-10 14:31:25.438671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.855 [2024-12-10 14:31:25.438721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.855 [2024-12-10 14:31:25.438735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.855 [2024-12-10 14:31:25.438742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.855 [2024-12-10 14:31:25.438748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.855 [2024-12-10 14:31:25.438763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.855 qpair failed and we were unable to recover it.
00:29:24.855 [2024-12-10 14:31:25.448754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.855 [2024-12-10 14:31:25.448809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.855 [2024-12-10 14:31:25.448822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.855 [2024-12-10 14:31:25.448829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.855 [2024-12-10 14:31:25.448835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.855 [2024-12-10 14:31:25.448849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.855 qpair failed and we were unable to recover it.
00:29:24.855 [2024-12-10 14:31:25.458667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.855 [2024-12-10 14:31:25.458718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.856 [2024-12-10 14:31:25.458732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.856 [2024-12-10 14:31:25.458739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.856 [2024-12-10 14:31:25.458745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.856 [2024-12-10 14:31:25.458759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.856 qpair failed and we were unable to recover it.
00:29:24.856 [2024-12-10 14:31:25.468771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.856 [2024-12-10 14:31:25.468857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.856 [2024-12-10 14:31:25.468870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.856 [2024-12-10 14:31:25.468877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.856 [2024-12-10 14:31:25.468883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.856 [2024-12-10 14:31:25.468897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.856 qpair failed and we were unable to recover it.
00:29:24.856 [2024-12-10 14:31:25.478780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.856 [2024-12-10 14:31:25.478839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.856 [2024-12-10 14:31:25.478852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.856 [2024-12-10 14:31:25.478859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.856 [2024-12-10 14:31:25.478865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.856 [2024-12-10 14:31:25.478879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.856 qpair failed and we were unable to recover it.
00:29:24.856 [2024-12-10 14:31:25.488770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.856 [2024-12-10 14:31:25.488847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.856 [2024-12-10 14:31:25.488860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.856 [2024-12-10 14:31:25.488867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.856 [2024-12-10 14:31:25.488873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.856 [2024-12-10 14:31:25.488888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.856 qpair failed and we were unable to recover it.
00:29:24.856 [2024-12-10 14:31:25.498803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.856 [2024-12-10 14:31:25.498875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.856 [2024-12-10 14:31:25.498888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.856 [2024-12-10 14:31:25.498896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.856 [2024-12-10 14:31:25.498901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.856 [2024-12-10 14:31:25.498917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.856 qpair failed and we were unable to recover it.
00:29:24.856 [2024-12-10 14:31:25.508821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.856 [2024-12-10 14:31:25.508881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.856 [2024-12-10 14:31:25.508894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.856 [2024-12-10 14:31:25.508902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.856 [2024-12-10 14:31:25.508908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.856 [2024-12-10 14:31:25.508922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.856 qpair failed and we were unable to recover it.
00:29:24.856 [2024-12-10 14:31:25.518958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.856 [2024-12-10 14:31:25.519009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.856 [2024-12-10 14:31:25.519022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.856 [2024-12-10 14:31:25.519034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.856 [2024-12-10 14:31:25.519041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.856 [2024-12-10 14:31:25.519055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.856 qpair failed and we were unable to recover it.
00:29:24.856 [2024-12-10 14:31:25.529008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.856 [2024-12-10 14:31:25.529083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.856 [2024-12-10 14:31:25.529096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.856 [2024-12-10 14:31:25.529103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.856 [2024-12-10 14:31:25.529109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.856 [2024-12-10 14:31:25.529124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.856 qpair failed and we were unable to recover it.
00:29:24.856 [2024-12-10 14:31:25.538964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.856 [2024-12-10 14:31:25.539018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.856 [2024-12-10 14:31:25.539031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.856 [2024-12-10 14:31:25.539038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.856 [2024-12-10 14:31:25.539045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.856 [2024-12-10 14:31:25.539060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.856 qpair failed and we were unable to recover it.
00:29:24.856 [2024-12-10 14:31:25.548995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.856 [2024-12-10 14:31:25.549091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.856 [2024-12-10 14:31:25.549104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.856 [2024-12-10 14:31:25.549111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.856 [2024-12-10 14:31:25.549117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.856 [2024-12-10 14:31:25.549131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.856 qpair failed and we were unable to recover it.
00:29:24.856 [2024-12-10 14:31:25.559012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.856 [2024-12-10 14:31:25.559075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.856 [2024-12-10 14:31:25.559090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.856 [2024-12-10 14:31:25.559098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.856 [2024-12-10 14:31:25.559104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.856 [2024-12-10 14:31:25.559122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.856 qpair failed and we were unable to recover it.
00:29:24.856 [2024-12-10 14:31:25.568972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.856 [2024-12-10 14:31:25.569027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.856 [2024-12-10 14:31:25.569039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.856 [2024-12-10 14:31:25.569046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.856 [2024-12-10 14:31:25.569053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.856 [2024-12-10 14:31:25.569067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.856 qpair failed and we were unable to recover it.
00:29:24.856 [2024-12-10 14:31:25.579023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.856 [2024-12-10 14:31:25.579079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.856 [2024-12-10 14:31:25.579093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.856 [2024-12-10 14:31:25.579100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.856 [2024-12-10 14:31:25.579106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.856 [2024-12-10 14:31:25.579120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.856 qpair failed and we were unable to recover it.
00:29:24.856 [2024-12-10 14:31:25.589115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.856 [2024-12-10 14:31:25.589214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.856 [2024-12-10 14:31:25.589232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.856 [2024-12-10 14:31:25.589239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.857 [2024-12-10 14:31:25.589245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:24.857 [2024-12-10 14:31:25.589260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.857 qpair failed and we were unable to recover it.
00:29:25.115 [2024-12-10 14:31:25.599141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.115 [2024-12-10 14:31:25.599189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.115 [2024-12-10 14:31:25.599202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.115 [2024-12-10 14:31:25.599209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.115 [2024-12-10 14:31:25.599215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.115 [2024-12-10 14:31:25.599234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.115 qpair failed and we were unable to recover it.
00:29:25.115 [2024-12-10 14:31:25.609175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.115 [2024-12-10 14:31:25.609234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.115 [2024-12-10 14:31:25.609247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.115 [2024-12-10 14:31:25.609254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.115 [2024-12-10 14:31:25.609261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.115 [2024-12-10 14:31:25.609275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.115 qpair failed and we were unable to recover it.
00:29:25.115 [2024-12-10 14:31:25.619161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.115 [2024-12-10 14:31:25.619220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.115 [2024-12-10 14:31:25.619234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.115 [2024-12-10 14:31:25.619241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.115 [2024-12-10 14:31:25.619247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.115 [2024-12-10 14:31:25.619261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.115 qpair failed and we were unable to recover it.
00:29:25.115 [2024-12-10 14:31:25.629235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.115 [2024-12-10 14:31:25.629288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.115 [2024-12-10 14:31:25.629301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.115 [2024-12-10 14:31:25.629308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.115 [2024-12-10 14:31:25.629314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.115 [2024-12-10 14:31:25.629329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.115 qpair failed and we were unable to recover it.
00:29:25.115 [2024-12-10 14:31:25.639263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.115 [2024-12-10 14:31:25.639320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.115 [2024-12-10 14:31:25.639334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.115 [2024-12-10 14:31:25.639341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.115 [2024-12-10 14:31:25.639347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.115 [2024-12-10 14:31:25.639361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.115 qpair failed and we were unable to recover it.
00:29:25.115 [2024-12-10 14:31:25.649231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.115 [2024-12-10 14:31:25.649299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.115 [2024-12-10 14:31:25.649315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.115 [2024-12-10 14:31:25.649322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.115 [2024-12-10 14:31:25.649328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.115 [2024-12-10 14:31:25.649343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.115 qpair failed and we were unable to recover it.
00:29:25.115 [2024-12-10 14:31:25.659317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.115 [2024-12-10 14:31:25.659376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.115 [2024-12-10 14:31:25.659389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.115 [2024-12-10 14:31:25.659396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.115 [2024-12-10 14:31:25.659403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.115 [2024-12-10 14:31:25.659417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.115 qpair failed and we were unable to recover it.
00:29:25.115 [2024-12-10 14:31:25.669261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.115 [2024-12-10 14:31:25.669328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.115 [2024-12-10 14:31:25.669342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.115 [2024-12-10 14:31:25.669349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.115 [2024-12-10 14:31:25.669356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.115 [2024-12-10 14:31:25.669370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.115 qpair failed and we were unable to recover it.
00:29:25.115 [2024-12-10 14:31:25.679302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.116 [2024-12-10 14:31:25.679358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.116 [2024-12-10 14:31:25.679371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.116 [2024-12-10 14:31:25.679378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.116 [2024-12-10 14:31:25.679385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.116 [2024-12-10 14:31:25.679398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.116 qpair failed and we were unable to recover it.
00:29:25.116 [2024-12-10 14:31:25.689432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.116 [2024-12-10 14:31:25.689485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.116 [2024-12-10 14:31:25.689498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.116 [2024-12-10 14:31:25.689505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.116 [2024-12-10 14:31:25.689514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.116 [2024-12-10 14:31:25.689529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.116 qpair failed and we were unable to recover it.
00:29:25.116 [2024-12-10 14:31:25.699356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.116 [2024-12-10 14:31:25.699412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.116 [2024-12-10 14:31:25.699425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.116 [2024-12-10 14:31:25.699432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.116 [2024-12-10 14:31:25.699438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.116 [2024-12-10 14:31:25.699453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.116 qpair failed and we were unable to recover it.
00:29:25.116 [2024-12-10 14:31:25.709452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.116 [2024-12-10 14:31:25.709504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.116 [2024-12-10 14:31:25.709517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.116 [2024-12-10 14:31:25.709524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.116 [2024-12-10 14:31:25.709531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.116 [2024-12-10 14:31:25.709545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.116 qpair failed and we were unable to recover it.
00:29:25.116 [2024-12-10 14:31:25.719501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.116 [2024-12-10 14:31:25.719579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.116 [2024-12-10 14:31:25.719592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.116 [2024-12-10 14:31:25.719599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.116 [2024-12-10 14:31:25.719605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.116 [2024-12-10 14:31:25.719619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.116 qpair failed and we were unable to recover it.
00:29:25.116 [2024-12-10 14:31:25.729552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.116 [2024-12-10 14:31:25.729610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.116 [2024-12-10 14:31:25.729623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.116 [2024-12-10 14:31:25.729630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.116 [2024-12-10 14:31:25.729636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.116 [2024-12-10 14:31:25.729651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.116 qpair failed and we were unable to recover it.
00:29:25.116 [2024-12-10 14:31:25.739540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.116 [2024-12-10 14:31:25.739598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.116 [2024-12-10 14:31:25.739610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.116 [2024-12-10 14:31:25.739617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.116 [2024-12-10 14:31:25.739624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.116 [2024-12-10 14:31:25.739638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.116 qpair failed and we were unable to recover it.
00:29:25.116 [2024-12-10 14:31:25.749481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.116 [2024-12-10 14:31:25.749536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.116 [2024-12-10 14:31:25.749549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.116 [2024-12-10 14:31:25.749556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.116 [2024-12-10 14:31:25.749563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.116 [2024-12-10 14:31:25.749576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.116 qpair failed and we were unable to recover it.
00:29:25.116 [2024-12-10 14:31:25.759579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.116 [2024-12-10 14:31:25.759626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.116 [2024-12-10 14:31:25.759639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.116 [2024-12-10 14:31:25.759646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.116 [2024-12-10 14:31:25.759652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.116 [2024-12-10 14:31:25.759666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.116 qpair failed and we were unable to recover it.
00:29:25.116 [2024-12-10 14:31:25.769622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.116 [2024-12-10 14:31:25.769715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.116 [2024-12-10 14:31:25.769727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.116 [2024-12-10 14:31:25.769734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.116 [2024-12-10 14:31:25.769740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.116 [2024-12-10 14:31:25.769754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.116 qpair failed and we were unable to recover it.
00:29:25.116 [2024-12-10 14:31:25.779632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.116 [2024-12-10 14:31:25.779688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.116 [2024-12-10 14:31:25.779704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.116 [2024-12-10 14:31:25.779712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.116 [2024-12-10 14:31:25.779718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.116 [2024-12-10 14:31:25.779732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.116 qpair failed and we were unable to recover it.
00:29:25.116 [2024-12-10 14:31:25.789673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.116 [2024-12-10 14:31:25.789723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.116 [2024-12-10 14:31:25.789736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.116 [2024-12-10 14:31:25.789743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.116 [2024-12-10 14:31:25.789750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.116 [2024-12-10 14:31:25.789765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.116 qpair failed and we were unable to recover it.
00:29:25.116 [2024-12-10 14:31:25.799736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.116 [2024-12-10 14:31:25.799813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.116 [2024-12-10 14:31:25.799827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.116 [2024-12-10 14:31:25.799834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.116 [2024-12-10 14:31:25.799841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.116 [2024-12-10 14:31:25.799855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.116 qpair failed and we were unable to recover it.
00:29:25.116 [2024-12-10 14:31:25.809774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.116 [2024-12-10 14:31:25.809830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.116 [2024-12-10 14:31:25.809843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.117 [2024-12-10 14:31:25.809850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.117 [2024-12-10 14:31:25.809857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.117 [2024-12-10 14:31:25.809872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.117 qpair failed and we were unable to recover it.
00:29:25.117 [2024-12-10 14:31:25.819708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.117 [2024-12-10 14:31:25.819777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.117 [2024-12-10 14:31:25.819790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.117 [2024-12-10 14:31:25.819797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.117 [2024-12-10 14:31:25.819806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.117 [2024-12-10 14:31:25.819822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.117 qpair failed and we were unable to recover it.
00:29:25.117 [2024-12-10 14:31:25.829767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.117 [2024-12-10 14:31:25.829816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.117 [2024-12-10 14:31:25.829829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.117 [2024-12-10 14:31:25.829836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.117 [2024-12-10 14:31:25.829842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.117 [2024-12-10 14:31:25.829857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.117 qpair failed and we were unable to recover it.
00:29:25.117 [2024-12-10 14:31:25.839847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.117 [2024-12-10 14:31:25.839924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.117 [2024-12-10 14:31:25.839937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.117 [2024-12-10 14:31:25.839945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.117 [2024-12-10 14:31:25.839950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90
00:29:25.117 [2024-12-10 14:31:25.839964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.117 qpair failed and we were unable to recover it.
00:29:25.117 [2024-12-10 14:31:25.849852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.117 [2024-12-10 14:31:25.849907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.117 [2024-12-10 14:31:25.849920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.117 [2024-12-10 14:31:25.849928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.117 [2024-12-10 14:31:25.849934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.117 [2024-12-10 14:31:25.849948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.117 qpair failed and we were unable to recover it. 00:29:25.375 [2024-12-10 14:31:25.859809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.375 [2024-12-10 14:31:25.859864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.375 [2024-12-10 14:31:25.859878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.375 [2024-12-10 14:31:25.859885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.375 [2024-12-10 14:31:25.859891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.375 [2024-12-10 14:31:25.859907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.375 qpair failed and we were unable to recover it. 00:29:25.375 [2024-12-10 14:31:25.869922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.375 [2024-12-10 14:31:25.869974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.375 [2024-12-10 14:31:25.869987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.375 [2024-12-10 14:31:25.869993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.375 [2024-12-10 14:31:25.870000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.375 [2024-12-10 14:31:25.870015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.375 qpair failed and we were unable to recover it. 
00:29:25.375 [2024-12-10 14:31:25.879949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.375 [2024-12-10 14:31:25.880014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.375 [2024-12-10 14:31:25.880028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.375 [2024-12-10 14:31:25.880034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.375 [2024-12-10 14:31:25.880040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.375 [2024-12-10 14:31:25.880055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.375 qpair failed and we were unable to recover it. 00:29:25.375 [2024-12-10 14:31:25.889946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.376 [2024-12-10 14:31:25.890002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.376 [2024-12-10 14:31:25.890015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.376 [2024-12-10 14:31:25.890022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.376 [2024-12-10 14:31:25.890028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.376 [2024-12-10 14:31:25.890042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.376 qpair failed and we were unable to recover it. 00:29:25.376 [2024-12-10 14:31:25.899990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.376 [2024-12-10 14:31:25.900047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.376 [2024-12-10 14:31:25.900060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.376 [2024-12-10 14:31:25.900067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.376 [2024-12-10 14:31:25.900073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.376 [2024-12-10 14:31:25.900088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.376 qpair failed and we were unable to recover it. 
00:29:25.376 [2024-12-10 14:31:25.910017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.376 [2024-12-10 14:31:25.910073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.376 [2024-12-10 14:31:25.910086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.376 [2024-12-10 14:31:25.910093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.376 [2024-12-10 14:31:25.910100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.376 [2024-12-10 14:31:25.910114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.376 qpair failed and we were unable to recover it. 00:29:25.376 [2024-12-10 14:31:25.920042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.376 [2024-12-10 14:31:25.920092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.376 [2024-12-10 14:31:25.920105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.376 [2024-12-10 14:31:25.920112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.376 [2024-12-10 14:31:25.920118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.376 [2024-12-10 14:31:25.920133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.376 qpair failed and we were unable to recover it. 00:29:25.376 [2024-12-10 14:31:25.930075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.376 [2024-12-10 14:31:25.930134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.376 [2024-12-10 14:31:25.930147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.376 [2024-12-10 14:31:25.930154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.376 [2024-12-10 14:31:25.930160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.376 [2024-12-10 14:31:25.930175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.376 qpair failed and we were unable to recover it. 
00:29:25.376 [2024-12-10 14:31:25.940137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.376 [2024-12-10 14:31:25.940210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.376 [2024-12-10 14:31:25.940316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.376 [2024-12-10 14:31:25.940331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.376 [2024-12-10 14:31:25.940337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.376 [2024-12-10 14:31:25.940371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.376 qpair failed and we were unable to recover it. 00:29:25.376 [2024-12-10 14:31:25.950108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.376 [2024-12-10 14:31:25.950163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.376 [2024-12-10 14:31:25.950176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.376 [2024-12-10 14:31:25.950187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.376 [2024-12-10 14:31:25.950193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.376 [2024-12-10 14:31:25.950208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.376 qpair failed and we were unable to recover it. 00:29:25.376 [2024-12-10 14:31:25.960152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.376 [2024-12-10 14:31:25.960210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.376 [2024-12-10 14:31:25.960227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.376 [2024-12-10 14:31:25.960235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.376 [2024-12-10 14:31:25.960241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.376 [2024-12-10 14:31:25.960256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.376 qpair failed and we were unable to recover it. 
00:29:25.376 [2024-12-10 14:31:25.970192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.376 [2024-12-10 14:31:25.970263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.376 [2024-12-10 14:31:25.970276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.376 [2024-12-10 14:31:25.970283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.376 [2024-12-10 14:31:25.970290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.376 [2024-12-10 14:31:25.970305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.376 qpair failed and we were unable to recover it. 00:29:25.376 [2024-12-10 14:31:25.980224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.376 [2024-12-10 14:31:25.980309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.376 [2024-12-10 14:31:25.980322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.376 [2024-12-10 14:31:25.980330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.376 [2024-12-10 14:31:25.980336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.376 [2024-12-10 14:31:25.980352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.376 qpair failed and we were unable to recover it. 00:29:25.376 [2024-12-10 14:31:25.990250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.376 [2024-12-10 14:31:25.990306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.376 [2024-12-10 14:31:25.990319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.376 [2024-12-10 14:31:25.990326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.376 [2024-12-10 14:31:25.990333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.376 [2024-12-10 14:31:25.990351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.376 qpair failed and we were unable to recover it. 
00:29:25.376 [2024-12-10 14:31:26.000310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.376 [2024-12-10 14:31:26.000375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.376 [2024-12-10 14:31:26.000389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.376 [2024-12-10 14:31:26.000397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.376 [2024-12-10 14:31:26.000403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.376 [2024-12-10 14:31:26.000418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.376 qpair failed and we were unable to recover it. 00:29:25.376 [2024-12-10 14:31:26.010261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.376 [2024-12-10 14:31:26.010316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.376 [2024-12-10 14:31:26.010329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.376 [2024-12-10 14:31:26.010336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.376 [2024-12-10 14:31:26.010343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.376 [2024-12-10 14:31:26.010358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.376 qpair failed and we were unable to recover it. 00:29:25.376 [2024-12-10 14:31:26.020324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.376 [2024-12-10 14:31:26.020382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.376 [2024-12-10 14:31:26.020395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.376 [2024-12-10 14:31:26.020402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.377 [2024-12-10 14:31:26.020409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.377 [2024-12-10 14:31:26.020423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.377 qpair failed and we were unable to recover it. 
00:29:25.377 [2024-12-10 14:31:26.030346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.377 [2024-12-10 14:31:26.030403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.377 [2024-12-10 14:31:26.030417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.377 [2024-12-10 14:31:26.030424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.377 [2024-12-10 14:31:26.030430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.377 [2024-12-10 14:31:26.030445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.377 qpair failed and we were unable to recover it. 00:29:25.377 [2024-12-10 14:31:26.040393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.377 [2024-12-10 14:31:26.040449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.377 [2024-12-10 14:31:26.040462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.377 [2024-12-10 14:31:26.040469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.377 [2024-12-10 14:31:26.040475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.377 [2024-12-10 14:31:26.040490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.377 qpair failed and we were unable to recover it. 00:29:25.377 [2024-12-10 14:31:26.050420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.377 [2024-12-10 14:31:26.050474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.377 [2024-12-10 14:31:26.050487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.377 [2024-12-10 14:31:26.050495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.377 [2024-12-10 14:31:26.050501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.377 [2024-12-10 14:31:26.050516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.377 qpair failed and we were unable to recover it. 
00:29:25.377 [2024-12-10 14:31:26.060433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.377 [2024-12-10 14:31:26.060490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.377 [2024-12-10 14:31:26.060503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.377 [2024-12-10 14:31:26.060510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.377 [2024-12-10 14:31:26.060517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.377 [2024-12-10 14:31:26.060531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.377 qpair failed and we were unable to recover it. 00:29:25.377 [2024-12-10 14:31:26.070494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.377 [2024-12-10 14:31:26.070551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.377 [2024-12-10 14:31:26.070564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.377 [2024-12-10 14:31:26.070572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.377 [2024-12-10 14:31:26.070578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.377 [2024-12-10 14:31:26.070593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.377 qpair failed and we were unable to recover it. 00:29:25.377 [2024-12-10 14:31:26.080504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.377 [2024-12-10 14:31:26.080568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.377 [2024-12-10 14:31:26.080581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.377 [2024-12-10 14:31:26.080591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.377 [2024-12-10 14:31:26.080598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.377 [2024-12-10 14:31:26.080612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.377 qpair failed and we were unable to recover it. 
00:29:25.377 [2024-12-10 14:31:26.090455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.377 [2024-12-10 14:31:26.090508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.377 [2024-12-10 14:31:26.090521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.377 [2024-12-10 14:31:26.090528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.377 [2024-12-10 14:31:26.090534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.377 [2024-12-10 14:31:26.090549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.377 qpair failed and we were unable to recover it. 00:29:25.377 [2024-12-10 14:31:26.100546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.377 [2024-12-10 14:31:26.100606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.377 [2024-12-10 14:31:26.100619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.377 [2024-12-10 14:31:26.100626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.377 [2024-12-10 14:31:26.100633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.377 [2024-12-10 14:31:26.100647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.377 qpair failed and we were unable to recover it. 00:29:25.377 [2024-12-10 14:31:26.110613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.377 [2024-12-10 14:31:26.110668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.377 [2024-12-10 14:31:26.110682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.377 [2024-12-10 14:31:26.110689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.377 [2024-12-10 14:31:26.110695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.377 [2024-12-10 14:31:26.110709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.377 qpair failed and we were unable to recover it. 
00:29:25.635 [2024-12-10 14:31:26.120643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.635 [2024-12-10 14:31:26.120706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.635 [2024-12-10 14:31:26.120719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.635 [2024-12-10 14:31:26.120727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.635 [2024-12-10 14:31:26.120733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.635 [2024-12-10 14:31:26.120751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.635 qpair failed and we were unable to recover it. 00:29:25.635 [2024-12-10 14:31:26.130696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.635 [2024-12-10 14:31:26.130793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.635 [2024-12-10 14:31:26.130806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.635 [2024-12-10 14:31:26.130813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.635 [2024-12-10 14:31:26.130819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.635 [2024-12-10 14:31:26.130834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.635 qpair failed and we were unable to recover it. 00:29:25.635 [2024-12-10 14:31:26.140673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.635 [2024-12-10 14:31:26.140727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.635 [2024-12-10 14:31:26.140740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.635 [2024-12-10 14:31:26.140747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.635 [2024-12-10 14:31:26.140754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.635 [2024-12-10 14:31:26.140768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.635 qpair failed and we were unable to recover it. 
00:29:25.636 [2024-12-10 14:31:26.150711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.636 [2024-12-10 14:31:26.150758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.636 [2024-12-10 14:31:26.150771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.636 [2024-12-10 14:31:26.150778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.636 [2024-12-10 14:31:26.150784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.636 [2024-12-10 14:31:26.150798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.636 qpair failed and we were unable to recover it. 00:29:25.636 [2024-12-10 14:31:26.160709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.636 [2024-12-10 14:31:26.160761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.636 [2024-12-10 14:31:26.160774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.636 [2024-12-10 14:31:26.160781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.636 [2024-12-10 14:31:26.160787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.636 [2024-12-10 14:31:26.160802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.636 qpair failed and we were unable to recover it. 00:29:25.636 [2024-12-10 14:31:26.170754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.636 [2024-12-10 14:31:26.170819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.636 [2024-12-10 14:31:26.170832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.636 [2024-12-10 14:31:26.170839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.636 [2024-12-10 14:31:26.170845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.636 [2024-12-10 14:31:26.170860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.636 qpair failed and we were unable to recover it. 
00:29:25.636 [2024-12-10 14:31:26.180778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.636 [2024-12-10 14:31:26.180833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.636 [2024-12-10 14:31:26.180846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.636 [2024-12-10 14:31:26.180853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.636 [2024-12-10 14:31:26.180859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.636 [2024-12-10 14:31:26.180873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.636 qpair failed and we were unable to recover it. 00:29:25.636 [2024-12-10 14:31:26.190842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.636 [2024-12-10 14:31:26.190893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.636 [2024-12-10 14:31:26.190906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.636 [2024-12-10 14:31:26.190913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.636 [2024-12-10 14:31:26.190919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.636 [2024-12-10 14:31:26.190935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.636 qpair failed and we were unable to recover it. 00:29:25.636 [2024-12-10 14:31:26.200824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.636 [2024-12-10 14:31:26.200877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.636 [2024-12-10 14:31:26.200890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.636 [2024-12-10 14:31:26.200897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.636 [2024-12-10 14:31:26.200903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.636 [2024-12-10 14:31:26.200918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.636 qpair failed and we were unable to recover it. 
00:29:25.636 [2024-12-10 14:31:26.210799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.636 [2024-12-10 14:31:26.210856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.636 [2024-12-10 14:31:26.210872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.636 [2024-12-10 14:31:26.210881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.636 [2024-12-10 14:31:26.210887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.636 [2024-12-10 14:31:26.210900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.636 qpair failed and we were unable to recover it. 00:29:25.636 [2024-12-10 14:31:26.220897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.636 [2024-12-10 14:31:26.220969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.636 [2024-12-10 14:31:26.220983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.636 [2024-12-10 14:31:26.220991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.636 [2024-12-10 14:31:26.220997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.636 [2024-12-10 14:31:26.221011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.636 qpair failed and we were unable to recover it. 00:29:25.636 [2024-12-10 14:31:26.230934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.636 [2024-12-10 14:31:26.230996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.636 [2024-12-10 14:31:26.231009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.636 [2024-12-10 14:31:26.231016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.636 [2024-12-10 14:31:26.231023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.636 [2024-12-10 14:31:26.231037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.636 qpair failed and we were unable to recover it. 
00:29:25.636 [2024-12-10 14:31:26.240966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.636 [2024-12-10 14:31:26.241019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.636 [2024-12-10 14:31:26.241033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.636 [2024-12-10 14:31:26.241040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.636 [2024-12-10 14:31:26.241046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.636 [2024-12-10 14:31:26.241061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.636 qpair failed and we were unable to recover it. 00:29:25.636 [2024-12-10 14:31:26.250958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.636 [2024-12-10 14:31:26.251015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.636 [2024-12-10 14:31:26.251028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.636 [2024-12-10 14:31:26.251035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.636 [2024-12-10 14:31:26.251047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.636 [2024-12-10 14:31:26.251061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.636 qpair failed and we were unable to recover it. 00:29:25.636 [2024-12-10 14:31:26.261012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.636 [2024-12-10 14:31:26.261068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.636 [2024-12-10 14:31:26.261081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.636 [2024-12-10 14:31:26.261088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.637 [2024-12-10 14:31:26.261095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.637 [2024-12-10 14:31:26.261110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.637 qpair failed and we were unable to recover it. 
00:29:25.637 [2024-12-10 14:31:26.271099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.637 [2024-12-10 14:31:26.271200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.637 [2024-12-10 14:31:26.271215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.637 [2024-12-10 14:31:26.271227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.637 [2024-12-10 14:31:26.271233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.637 [2024-12-10 14:31:26.271248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.637 qpair failed and we were unable to recover it. 00:29:25.637 [2024-12-10 14:31:26.281046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.637 [2024-12-10 14:31:26.281099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.637 [2024-12-10 14:31:26.281112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.637 [2024-12-10 14:31:26.281119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.637 [2024-12-10 14:31:26.281125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.637 [2024-12-10 14:31:26.281139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.637 qpair failed and we were unable to recover it. 00:29:25.637 [2024-12-10 14:31:26.291108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.637 [2024-12-10 14:31:26.291166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.637 [2024-12-10 14:31:26.291179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.637 [2024-12-10 14:31:26.291186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.637 [2024-12-10 14:31:26.291192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.637 [2024-12-10 14:31:26.291206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.637 qpair failed and we were unable to recover it. 
00:29:25.637 [2024-12-10 14:31:26.301138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.637 [2024-12-10 14:31:26.301194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.637 [2024-12-10 14:31:26.301207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.637 [2024-12-10 14:31:26.301214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.637 [2024-12-10 14:31:26.301225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.637 [2024-12-10 14:31:26.301241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.637 qpair failed and we were unable to recover it. 00:29:25.637 [2024-12-10 14:31:26.311125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.637 [2024-12-10 14:31:26.311178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.637 [2024-12-10 14:31:26.311191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.637 [2024-12-10 14:31:26.311198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.637 [2024-12-10 14:31:26.311204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.637 [2024-12-10 14:31:26.311222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.637 qpair failed and we were unable to recover it. 00:29:25.637 [2024-12-10 14:31:26.321176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.637 [2024-12-10 14:31:26.321242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.637 [2024-12-10 14:31:26.321255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.637 [2024-12-10 14:31:26.321262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.637 [2024-12-10 14:31:26.321268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.637 [2024-12-10 14:31:26.321283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.637 qpair failed and we were unable to recover it. 
00:29:25.637 [2024-12-10 14:31:26.331178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.637 [2024-12-10 14:31:26.331279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.637 [2024-12-10 14:31:26.331292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.637 [2024-12-10 14:31:26.331299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.637 [2024-12-10 14:31:26.331305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.637 [2024-12-10 14:31:26.331320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.637 qpair failed and we were unable to recover it. 00:29:25.637 [2024-12-10 14:31:26.341219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.637 [2024-12-10 14:31:26.341276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.637 [2024-12-10 14:31:26.341292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.637 [2024-12-10 14:31:26.341299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.637 [2024-12-10 14:31:26.341305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.637 [2024-12-10 14:31:26.341319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.637 qpair failed and we were unable to recover it. 00:29:25.637 [2024-12-10 14:31:26.351269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.637 [2024-12-10 14:31:26.351357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.637 [2024-12-10 14:31:26.351370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.637 [2024-12-10 14:31:26.351377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.637 [2024-12-10 14:31:26.351383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.637 [2024-12-10 14:31:26.351397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.637 qpair failed and we were unable to recover it. 
00:29:25.637 [2024-12-10 14:31:26.361279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.637 [2024-12-10 14:31:26.361331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.637 [2024-12-10 14:31:26.361344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.637 [2024-12-10 14:31:26.361352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.637 [2024-12-10 14:31:26.361358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.637 [2024-12-10 14:31:26.361372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.637 qpair failed and we were unable to recover it. 00:29:25.637 [2024-12-10 14:31:26.371325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.637 [2024-12-10 14:31:26.371382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.637 [2024-12-10 14:31:26.371396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.637 [2024-12-10 14:31:26.371403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.637 [2024-12-10 14:31:26.371409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.637 [2024-12-10 14:31:26.371423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.637 qpair failed and we were unable to recover it. 00:29:25.895 [2024-12-10 14:31:26.381344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.895 [2024-12-10 14:31:26.381402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.895 [2024-12-10 14:31:26.381415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.895 [2024-12-10 14:31:26.381422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.895 [2024-12-10 14:31:26.381431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.895 [2024-12-10 14:31:26.381446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.895 qpair failed and we were unable to recover it. 
00:29:25.895 [2024-12-10 14:31:26.391397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.895 [2024-12-10 14:31:26.391449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.895 [2024-12-10 14:31:26.391462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.895 [2024-12-10 14:31:26.391470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.895 [2024-12-10 14:31:26.391476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.895 [2024-12-10 14:31:26.391490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.895 qpair failed and we were unable to recover it. 00:29:25.895 [2024-12-10 14:31:26.401471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.895 [2024-12-10 14:31:26.401547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.895 [2024-12-10 14:31:26.401560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.895 [2024-12-10 14:31:26.401567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.895 [2024-12-10 14:31:26.401573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.895 [2024-12-10 14:31:26.401588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.895 qpair failed and we were unable to recover it. 00:29:25.895 [2024-12-10 14:31:26.411429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.895 [2024-12-10 14:31:26.411487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.895 [2024-12-10 14:31:26.411500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.895 [2024-12-10 14:31:26.411507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.895 [2024-12-10 14:31:26.411513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.895 [2024-12-10 14:31:26.411528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.895 qpair failed and we were unable to recover it. 
00:29:25.896 [2024-12-10 14:31:26.421481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.896 [2024-12-10 14:31:26.421543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.896 [2024-12-10 14:31:26.421556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.896 [2024-12-10 14:31:26.421564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.896 [2024-12-10 14:31:26.421569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.896 [2024-12-10 14:31:26.421584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.896 qpair failed and we were unable to recover it. 00:29:25.896 [2024-12-10 14:31:26.431478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.896 [2024-12-10 14:31:26.431529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.896 [2024-12-10 14:31:26.431542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.896 [2024-12-10 14:31:26.431549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.896 [2024-12-10 14:31:26.431555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.896 [2024-12-10 14:31:26.431570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.896 qpair failed and we were unable to recover it. 00:29:25.896 [2024-12-10 14:31:26.441438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.896 [2024-12-10 14:31:26.441531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.896 [2024-12-10 14:31:26.441544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.896 [2024-12-10 14:31:26.441551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.896 [2024-12-10 14:31:26.441558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.896 [2024-12-10 14:31:26.441572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.896 qpair failed and we were unable to recover it. 
00:29:25.896 [2024-12-10 14:31:26.451558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.896 [2024-12-10 14:31:26.451659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.896 [2024-12-10 14:31:26.451672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.896 [2024-12-10 14:31:26.451679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.896 [2024-12-10 14:31:26.451685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.896 [2024-12-10 14:31:26.451700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.896 qpair failed and we were unable to recover it. 00:29:25.896 [2024-12-10 14:31:26.461585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.896 [2024-12-10 14:31:26.461642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.896 [2024-12-10 14:31:26.461655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.896 [2024-12-10 14:31:26.461663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.896 [2024-12-10 14:31:26.461669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.896 [2024-12-10 14:31:26.461683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.896 qpair failed and we were unable to recover it. 00:29:25.896 [2024-12-10 14:31:26.471610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.896 [2024-12-10 14:31:26.471665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.896 [2024-12-10 14:31:26.471678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.896 [2024-12-10 14:31:26.471685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.896 [2024-12-10 14:31:26.471693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.896 [2024-12-10 14:31:26.471707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.896 qpair failed and we were unable to recover it. 
00:29:25.896 [2024-12-10 14:31:26.481670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.896 [2024-12-10 14:31:26.481751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.896 [2024-12-10 14:31:26.481765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.896 [2024-12-10 14:31:26.481772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.896 [2024-12-10 14:31:26.481778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.896 [2024-12-10 14:31:26.481792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.896 qpair failed and we were unable to recover it. 00:29:25.896 [2024-12-10 14:31:26.491674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.896 [2024-12-10 14:31:26.491731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.896 [2024-12-10 14:31:26.491744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.896 [2024-12-10 14:31:26.491751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.896 [2024-12-10 14:31:26.491757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.896 [2024-12-10 14:31:26.491771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.896 qpair failed and we were unable to recover it. 00:29:25.896 [2024-12-10 14:31:26.501726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.896 [2024-12-10 14:31:26.501783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.896 [2024-12-10 14:31:26.501795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.896 [2024-12-10 14:31:26.501803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.896 [2024-12-10 14:31:26.501809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.896 [2024-12-10 14:31:26.501824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.896 qpair failed and we were unable to recover it. 
00:29:25.896 [2024-12-10 14:31:26.511705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.896 [2024-12-10 14:31:26.511755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.896 [2024-12-10 14:31:26.511768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.896 [2024-12-10 14:31:26.511778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.896 [2024-12-10 14:31:26.511784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.896 [2024-12-10 14:31:26.511799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.896 qpair failed and we were unable to recover it. 00:29:25.896 [2024-12-10 14:31:26.521746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.896 [2024-12-10 14:31:26.521808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.896 [2024-12-10 14:31:26.521821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.896 [2024-12-10 14:31:26.521828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.896 [2024-12-10 14:31:26.521834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.896 [2024-12-10 14:31:26.521849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.896 qpair failed and we were unable to recover it. 00:29:25.896 [2024-12-10 14:31:26.531785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.896 [2024-12-10 14:31:26.531840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.896 [2024-12-10 14:31:26.531853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.896 [2024-12-10 14:31:26.531860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.896 [2024-12-10 14:31:26.531866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.896 [2024-12-10 14:31:26.531881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.896 qpair failed and we were unable to recover it. 
00:29:25.896 [2024-12-10 14:31:26.541811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.896 [2024-12-10 14:31:26.541866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.896 [2024-12-10 14:31:26.541879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.896 [2024-12-10 14:31:26.541886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.896 [2024-12-10 14:31:26.541892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.896 [2024-12-10 14:31:26.541906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.896 qpair failed and we were unable to recover it. 00:29:25.896 [2024-12-10 14:31:26.551842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.896 [2024-12-10 14:31:26.551945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.896 [2024-12-10 14:31:26.551958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.897 [2024-12-10 14:31:26.551965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.897 [2024-12-10 14:31:26.551971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.897 [2024-12-10 14:31:26.551988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.897 qpair failed and we were unable to recover it. 00:29:25.897 [2024-12-10 14:31:26.561863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.897 [2024-12-10 14:31:26.561928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.897 [2024-12-10 14:31:26.561941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.897 [2024-12-10 14:31:26.561949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.897 [2024-12-10 14:31:26.561955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.897 [2024-12-10 14:31:26.561969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.897 qpair failed and we were unable to recover it. 
00:29:25.897 [2024-12-10 14:31:26.571886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.897 [2024-12-10 14:31:26.571939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.897 [2024-12-10 14:31:26.571952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.897 [2024-12-10 14:31:26.571958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.897 [2024-12-10 14:31:26.571964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.897 [2024-12-10 14:31:26.571979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.897 qpair failed and we were unable to recover it. 00:29:25.897 [2024-12-10 14:31:26.581858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.897 [2024-12-10 14:31:26.581950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.897 [2024-12-10 14:31:26.581963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.897 [2024-12-10 14:31:26.581970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.897 [2024-12-10 14:31:26.581976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.897 [2024-12-10 14:31:26.581991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.897 qpair failed and we were unable to recover it. 00:29:25.897 [2024-12-10 14:31:26.591938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.897 [2024-12-10 14:31:26.591991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.897 [2024-12-10 14:31:26.592003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.897 [2024-12-10 14:31:26.592011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.897 [2024-12-10 14:31:26.592017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.897 [2024-12-10 14:31:26.592032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.897 qpair failed and we were unable to recover it. 
00:29:25.897 [2024-12-10 14:31:26.601983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.897 [2024-12-10 14:31:26.602037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.897 [2024-12-10 14:31:26.602051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.897 [2024-12-10 14:31:26.602058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.897 [2024-12-10 14:31:26.602064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.897 [2024-12-10 14:31:26.602078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.897 qpair failed and we were unable to recover it. 00:29:25.897 [2024-12-10 14:31:26.612037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.897 [2024-12-10 14:31:26.612094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.897 [2024-12-10 14:31:26.612107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.897 [2024-12-10 14:31:26.612114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.897 [2024-12-10 14:31:26.612120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.897 [2024-12-10 14:31:26.612135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.897 qpair failed and we were unable to recover it. 00:29:25.897 [2024-12-10 14:31:26.622038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.897 [2024-12-10 14:31:26.622087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.897 [2024-12-10 14:31:26.622100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.897 [2024-12-10 14:31:26.622107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.897 [2024-12-10 14:31:26.622113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.897 [2024-12-10 14:31:26.622128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.897 qpair failed and we were unable to recover it. 
00:29:25.897 [2024-12-10 14:31:26.632009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.897 [2024-12-10 14:31:26.632064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.897 [2024-12-10 14:31:26.632077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.897 [2024-12-10 14:31:26.632084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.897 [2024-12-10 14:31:26.632090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:25.897 [2024-12-10 14:31:26.632105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.897 qpair failed and we were unable to recover it. 00:29:26.155 [2024-12-10 14:31:26.642078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.155 [2024-12-10 14:31:26.642126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.155 [2024-12-10 14:31:26.642139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.155 [2024-12-10 14:31:26.642149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.155 [2024-12-10 14:31:26.642155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.155 [2024-12-10 14:31:26.642170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.155 qpair failed and we were unable to recover it. 00:29:26.155 [2024-12-10 14:31:26.652112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.155 [2024-12-10 14:31:26.652166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.155 [2024-12-10 14:31:26.652178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.155 [2024-12-10 14:31:26.652185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.155 [2024-12-10 14:31:26.652191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.155 [2024-12-10 14:31:26.652206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.155 qpair failed and we were unable to recover it. 
00:29:26.155 [2024-12-10 14:31:26.662169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.155 [2024-12-10 14:31:26.662228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.155 [2024-12-10 14:31:26.662241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.155 [2024-12-10 14:31:26.662249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.155 [2024-12-10 14:31:26.662255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.155 [2024-12-10 14:31:26.662269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.155 qpair failed and we were unable to recover it. 00:29:26.155 [2024-12-10 14:31:26.672229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.155 [2024-12-10 14:31:26.672285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.155 [2024-12-10 14:31:26.672298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.155 [2024-12-10 14:31:26.672306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.155 [2024-12-10 14:31:26.672313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.155 [2024-12-10 14:31:26.672327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.155 qpair failed and we were unable to recover it. 00:29:26.155 [2024-12-10 14:31:26.682207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.155 [2024-12-10 14:31:26.682264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.155 [2024-12-10 14:31:26.682277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.155 [2024-12-10 14:31:26.682283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.155 [2024-12-10 14:31:26.682290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.155 [2024-12-10 14:31:26.682307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.156 qpair failed and we were unable to recover it. 
00:29:26.156 [2024-12-10 14:31:26.692251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.156 [2024-12-10 14:31:26.692308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.156 [2024-12-10 14:31:26.692321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.156 [2024-12-10 14:31:26.692328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.156 [2024-12-10 14:31:26.692335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.156 [2024-12-10 14:31:26.692349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.156 qpair failed and we were unable to recover it. 00:29:26.156 [2024-12-10 14:31:26.702263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.156 [2024-12-10 14:31:26.702316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.156 [2024-12-10 14:31:26.702328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.156 [2024-12-10 14:31:26.702335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.156 [2024-12-10 14:31:26.702342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.156 [2024-12-10 14:31:26.702357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.156 qpair failed and we were unable to recover it. 00:29:26.156 [2024-12-10 14:31:26.712292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.156 [2024-12-10 14:31:26.712343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.156 [2024-12-10 14:31:26.712356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.156 [2024-12-10 14:31:26.712363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.156 [2024-12-10 14:31:26.712369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.156 [2024-12-10 14:31:26.712384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.156 qpair failed and we were unable to recover it. 
00:29:26.156 [2024-12-10 14:31:26.722338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.156 [2024-12-10 14:31:26.722394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.156 [2024-12-10 14:31:26.722409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.156 [2024-12-10 14:31:26.722417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.156 [2024-12-10 14:31:26.722423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.156 [2024-12-10 14:31:26.722438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.156 qpair failed and we were unable to recover it. 00:29:26.156 [2024-12-10 14:31:26.732412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.156 [2024-12-10 14:31:26.732517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.156 [2024-12-10 14:31:26.732530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.156 [2024-12-10 14:31:26.732537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.156 [2024-12-10 14:31:26.732544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.156 [2024-12-10 14:31:26.732559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.156 qpair failed and we were unable to recover it. 00:29:26.156 [2024-12-10 14:31:26.742393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.156 [2024-12-10 14:31:26.742444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.156 [2024-12-10 14:31:26.742457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.156 [2024-12-10 14:31:26.742464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.156 [2024-12-10 14:31:26.742471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.156 [2024-12-10 14:31:26.742484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.156 qpair failed and we were unable to recover it. 
00:29:26.156 [2024-12-10 14:31:26.752418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.156 [2024-12-10 14:31:26.752472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.156 [2024-12-10 14:31:26.752485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.156 [2024-12-10 14:31:26.752492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.156 [2024-12-10 14:31:26.752499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.156 [2024-12-10 14:31:26.752513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.156 qpair failed and we were unable to recover it. 00:29:26.156 [2024-12-10 14:31:26.762441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.156 [2024-12-10 14:31:26.762497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.156 [2024-12-10 14:31:26.762511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.156 [2024-12-10 14:31:26.762518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.156 [2024-12-10 14:31:26.762525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.156 [2024-12-10 14:31:26.762540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.156 qpair failed and we were unable to recover it. 00:29:26.156 [2024-12-10 14:31:26.772466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.156 [2024-12-10 14:31:26.772533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.156 [2024-12-10 14:31:26.772549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.156 [2024-12-10 14:31:26.772557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.156 [2024-12-10 14:31:26.772563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.156 [2024-12-10 14:31:26.772578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.156 qpair failed and we were unable to recover it. 
00:29:26.156 [2024-12-10 14:31:26.782512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.156 [2024-12-10 14:31:26.782566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.156 [2024-12-10 14:31:26.782579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.156 [2024-12-10 14:31:26.782586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.156 [2024-12-10 14:31:26.782592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.156 [2024-12-10 14:31:26.782607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.156 qpair failed and we were unable to recover it. 00:29:26.156 [2024-12-10 14:31:26.792504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.156 [2024-12-10 14:31:26.792562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.156 [2024-12-10 14:31:26.792575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.156 [2024-12-10 14:31:26.792583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.156 [2024-12-10 14:31:26.792589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.156 [2024-12-10 14:31:26.792604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.156 qpair failed and we were unable to recover it. 00:29:26.156 [2024-12-10 14:31:26.802492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.156 [2024-12-10 14:31:26.802547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.156 [2024-12-10 14:31:26.802560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.156 [2024-12-10 14:31:26.802567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.156 [2024-12-10 14:31:26.802574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.156 [2024-12-10 14:31:26.802588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.156 qpair failed and we were unable to recover it. 
00:29:26.156 [2024-12-10 14:31:26.812592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.156 [2024-12-10 14:31:26.812646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.156 [2024-12-10 14:31:26.812660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.156 [2024-12-10 14:31:26.812667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.156 [2024-12-10 14:31:26.812676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.156 [2024-12-10 14:31:26.812691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.156 qpair failed and we were unable to recover it. 00:29:26.156 [2024-12-10 14:31:26.822559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.157 [2024-12-10 14:31:26.822614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.157 [2024-12-10 14:31:26.822628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.157 [2024-12-10 14:31:26.822634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.157 [2024-12-10 14:31:26.822640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.157 [2024-12-10 14:31:26.822656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.157 qpair failed and we were unable to recover it. 00:29:26.157 [2024-12-10 14:31:26.832612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.157 [2024-12-10 14:31:26.832664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.157 [2024-12-10 14:31:26.832677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.157 [2024-12-10 14:31:26.832684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.157 [2024-12-10 14:31:26.832690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.157 [2024-12-10 14:31:26.832706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.157 qpair failed and we were unable to recover it. 
00:29:26.157 [2024-12-10 14:31:26.842671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.157 [2024-12-10 14:31:26.842725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.157 [2024-12-10 14:31:26.842739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.157 [2024-12-10 14:31:26.842746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.157 [2024-12-10 14:31:26.842752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.157 [2024-12-10 14:31:26.842767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.157 qpair failed and we were unable to recover it. 00:29:26.157 [2024-12-10 14:31:26.852732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.157 [2024-12-10 14:31:26.852809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.157 [2024-12-10 14:31:26.852823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.157 [2024-12-10 14:31:26.852831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.157 [2024-12-10 14:31:26.852838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.157 [2024-12-10 14:31:26.852855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.157 qpair failed and we were unable to recover it. 00:29:26.157 [2024-12-10 14:31:26.862738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.157 [2024-12-10 14:31:26.862794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.157 [2024-12-10 14:31:26.862807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.157 [2024-12-10 14:31:26.862815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.157 [2024-12-10 14:31:26.862821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.157 [2024-12-10 14:31:26.862836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.157 qpair failed and we were unable to recover it. 
00:29:26.157 [2024-12-10 14:31:26.872680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.157 [2024-12-10 14:31:26.872736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.157 [2024-12-10 14:31:26.872749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.157 [2024-12-10 14:31:26.872756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.157 [2024-12-10 14:31:26.872763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.157 [2024-12-10 14:31:26.872777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.157 qpair failed and we were unable to recover it. 00:29:26.157 [2024-12-10 14:31:26.882767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.157 [2024-12-10 14:31:26.882845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.157 [2024-12-10 14:31:26.882859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.157 [2024-12-10 14:31:26.882866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.157 [2024-12-10 14:31:26.882873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.157 [2024-12-10 14:31:26.882887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.157 qpair failed and we were unable to recover it. 00:29:26.157 [2024-12-10 14:31:26.892780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.157 [2024-12-10 14:31:26.892835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.157 [2024-12-10 14:31:26.892848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.157 [2024-12-10 14:31:26.892855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.157 [2024-12-10 14:31:26.892861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.157 [2024-12-10 14:31:26.892876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.157 qpair failed and we were unable to recover it. 
00:29:26.414 [2024-12-10 14:31:26.902751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.414 [2024-12-10 14:31:26.902803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.414 [2024-12-10 14:31:26.902819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.414 [2024-12-10 14:31:26.902826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.414 [2024-12-10 14:31:26.902832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.414 [2024-12-10 14:31:26.902847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.414 qpair failed and we were unable to recover it. 00:29:26.414 [2024-12-10 14:31:26.912794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.414 [2024-12-10 14:31:26.912847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.414 [2024-12-10 14:31:26.912860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.414 [2024-12-10 14:31:26.912867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.414 [2024-12-10 14:31:26.912873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.414 [2024-12-10 14:31:26.912887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.414 qpair failed and we were unable to recover it. 00:29:26.414 [2024-12-10 14:31:26.922892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.414 [2024-12-10 14:31:26.922989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.414 [2024-12-10 14:31:26.923003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.414 [2024-12-10 14:31:26.923010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.414 [2024-12-10 14:31:26.923016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.414 [2024-12-10 14:31:26.923030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.414 qpair failed and we were unable to recover it. 
00:29:26.414 [2024-12-10 14:31:26.932943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.414 [2024-12-10 14:31:26.933005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.414 [2024-12-10 14:31:26.933018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.414 [2024-12-10 14:31:26.933025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.414 [2024-12-10 14:31:26.933031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.414 [2024-12-10 14:31:26.933046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.414 qpair failed and we were unable to recover it. 00:29:26.414 [2024-12-10 14:31:26.942943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.414 [2024-12-10 14:31:26.942994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.414 [2024-12-10 14:31:26.943007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.414 [2024-12-10 14:31:26.943014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.414 [2024-12-10 14:31:26.943023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.414 [2024-12-10 14:31:26.943037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.414 qpair failed and we were unable to recover it. 00:29:26.414 [2024-12-10 14:31:26.952998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.414 [2024-12-10 14:31:26.953052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.414 [2024-12-10 14:31:26.953066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.414 [2024-12-10 14:31:26.953073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.414 [2024-12-10 14:31:26.953079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.414 [2024-12-10 14:31:26.953093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.414 qpair failed and we were unable to recover it. 
[The seven-line error block above repeats unchanged for every subsequent connect retry, trace timestamps 14:31:26.942943 through 14:31:27.614856, each attempt against the same tqpair=0x7f6368000b90 / qpair id 3 and each ending "qpair failed and we were unable to recover it."]
00:29:26.934 [2024-12-10 14:31:27.624901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.934 [2024-12-10 14:31:27.624973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.934 [2024-12-10 14:31:27.624986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.934 [2024-12-10 14:31:27.624993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.934 [2024-12-10 14:31:27.625000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.934 [2024-12-10 14:31:27.625013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.934 qpair failed and we were unable to recover it. 00:29:26.934 [2024-12-10 14:31:27.634939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.934 [2024-12-10 14:31:27.634995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.934 [2024-12-10 14:31:27.635008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.934 [2024-12-10 14:31:27.635018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.934 [2024-12-10 14:31:27.635024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.934 [2024-12-10 14:31:27.635039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.934 qpair failed and we were unable to recover it. 00:29:26.934 [2024-12-10 14:31:27.644943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.934 [2024-12-10 14:31:27.645025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.934 [2024-12-10 14:31:27.645039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.934 [2024-12-10 14:31:27.645046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.934 [2024-12-10 14:31:27.645052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.934 [2024-12-10 14:31:27.645067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.934 qpair failed and we were unable to recover it. 
00:29:26.934 [2024-12-10 14:31:27.654976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.934 [2024-12-10 14:31:27.655065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.934 [2024-12-10 14:31:27.655079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.934 [2024-12-10 14:31:27.655086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.934 [2024-12-10 14:31:27.655093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.934 [2024-12-10 14:31:27.655107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.934 qpair failed and we were unable to recover it. 00:29:26.934 [2024-12-10 14:31:27.665050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.934 [2024-12-10 14:31:27.665106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.934 [2024-12-10 14:31:27.665119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.934 [2024-12-10 14:31:27.665126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.934 [2024-12-10 14:31:27.665132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:26.934 [2024-12-10 14:31:27.665147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.934 qpair failed and we were unable to recover it. 00:29:27.192 [2024-12-10 14:31:27.675049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.192 [2024-12-10 14:31:27.675106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.192 [2024-12-10 14:31:27.675118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.192 [2024-12-10 14:31:27.675125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.192 [2024-12-10 14:31:27.675132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6368000b90 00:29:27.192 [2024-12-10 14:31:27.675149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.192 qpair failed and we were unable to recover it. 00:29:27.192 [2024-12-10 14:31:27.675259] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:29:27.192 A controller has encountered a failure and is being reset. 00:29:27.192 Controller properly reset. 
00:29:27.192 Initializing NVMe Controllers
00:29:27.192 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:27.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:27.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:29:27.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:29:27.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:29:27.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:29:27.192 Initialization complete. Launching workers.
00:29:27.192 Starting thread on core 1
00:29:27.192 Starting thread on core 2
00:29:27.192 Starting thread on core 3
00:29:27.192 Starting thread on core 0
00:29:27.192 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:29:27.192
00:29:27.192 real 0m10.876s
00:29:27.192 user 0m19.041s
00:29:27.192 sys 0m4.669s
00:29:27.192 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:27.192 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:27.192 ************************************
00:29:27.192 END TEST nvmf_target_disconnect_tc2
00:29:27.192 ************************************
00:29:27.192 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:29:27.192 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:29:27.192 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:29:27.192 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:27.192 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:29:27.192 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:27.192 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:29:27.192 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:27.192 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:27.192 rmmod nvme_tcp
00:29:27.192 rmmod nvme_fabrics
00:29:27.192 rmmod nvme_keyring
00:29:27.192 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:27.192 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:29:27.192 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:29:27.192 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1816419 ']'
00:29:27.192 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1816419
00:29:27.192 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1816419 ']'
00:29:27.192 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1816419
00:29:27.192 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:29:27.192 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:27.192 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1816419
00:29:27.451 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:29:27.451 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:29:27.451 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1816419'
00:29:27.451 killing process with pid 1816419
00:29:27.451 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1816419
00:29:27.451 14:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1816419
00:29:27.451 14:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:27.451 14:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:27.451 14:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:27.451 14:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:29:27.451 14:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:29:27.451 14:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:27.451 14:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:29:27.451 14:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:27.451 14:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:27.452 14:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:27.452 14:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:27.452 14:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:29.987 14:31:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:29.987
00:29:29.987 real 0m20.500s
00:29:29.987 user 0m47.307s
00:29:29.987 sys 0m10.188s
00:29:29.987 14:31:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:29.987 14:31:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:29:29.987 ************************************
00:29:29.987 END TEST nvmf_target_disconnect
00:29:29.987 ************************************
00:29:29.987 14:31:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:29:29.987
00:29:29.987 real 6m8.321s
00:29:29.987 user 10m40.727s
00:29:29.987 sys 2m9.195s
00:29:29.987 14:31:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:29.987 14:31:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:29.987 ************************************
00:29:29.987 END TEST nvmf_host
00:29:29.987 ************************************
00:29:29.987 14:31:30 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:29:29.987 14:31:30 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:29:29.987 14:31:30 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:29:29.987 14:31:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:29:29.987 14:31:30 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:29.987 14:31:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:29.987 ************************************
00:29:29.987 START TEST nvmf_target_core_interrupt_mode
00:29:29.987 ************************************
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:29:29.987 * Looking for test storage...
00:29:29.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2
00:29:29.987 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:29:29.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:29.988 --rc genhtml_branch_coverage=1
00:29:29.988 --rc genhtml_function_coverage=1
00:29:29.988 --rc genhtml_legend=1
00:29:29.988 --rc geninfo_all_blocks=1
00:29:29.988 --rc geninfo_unexecuted_blocks=1
00:29:29.988
00:29:29.988 '
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:29:29.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:29.988 --rc genhtml_branch_coverage=1
00:29:29.988 --rc genhtml_function_coverage=1
00:29:29.988 --rc genhtml_legend=1
00:29:29.988 --rc geninfo_all_blocks=1
00:29:29.988 --rc geninfo_unexecuted_blocks=1
00:29:29.988
00:29:29.988 '
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:29:29.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:29.988 --rc genhtml_branch_coverage=1
00:29:29.988 --rc genhtml_function_coverage=1
00:29:29.988 --rc genhtml_legend=1
00:29:29.988 --rc geninfo_all_blocks=1
00:29:29.988 --rc geninfo_unexecuted_blocks=1
00:29:29.988
00:29:29.988 '
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:29:29.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:29.988 --rc genhtml_branch_coverage=1
00:29:29.988 --rc genhtml_function_coverage=1
00:29:29.988 --rc genhtml_legend=1
00:29:29.988 --rc geninfo_all_blocks=1
00:29:29.988 --rc geninfo_unexecuted_blocks=1
00:29:29.988
00:29:29.988 '
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:29:29.988 ************************************
00:29:29.988 START TEST nvmf_abort
00:29:29.988 ************************************
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:29:29.988 * Looking for test storage...
00:29:29.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:29:29.988 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:29:30.248 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version
00:29:30.248 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:29:30.248 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:29:30.248 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:30.248 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:30.248 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:30.248 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-:
00:29:30.248 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1
00:29:30.248 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-:
00:29:30.248 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2
00:29:30.248 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<'
00:29:30.248 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:29:30.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:30.249 --rc genhtml_branch_coverage=1
00:29:30.249 --rc genhtml_function_coverage=1
00:29:30.249 --rc genhtml_legend=1
00:29:30.249 --rc geninfo_all_blocks=1
00:29:30.249 --rc geninfo_unexecuted_blocks=1
00:29:30.249
00:29:30.249 '
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:29:30.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:30.249 --rc genhtml_branch_coverage=1
00:29:30.249 --rc genhtml_function_coverage=1
00:29:30.249 --rc genhtml_legend=1
00:29:30.249 --rc geninfo_all_blocks=1
00:29:30.249 --rc geninfo_unexecuted_blocks=1
00:29:30.249
00:29:30.249 '
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:29:30.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:30.249 --rc genhtml_branch_coverage=1
00:29:30.249 --rc genhtml_function_coverage=1
00:29:30.249 --rc genhtml_legend=1
00:29:30.249 --rc geninfo_all_blocks=1
00:29:30.249 --rc geninfo_unexecuted_blocks=1
00:29:30.249
00:29:30.249 '
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:29:30.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:30.249 --rc genhtml_branch_coverage=1
00:29:30.249 --rc genhtml_function_coverage=1
00:29:30.249 --rc genhtml_legend=1
00:29:30.249 --rc geninfo_all_blocks=1
00:29:30.249 --rc geninfo_unexecuted_blocks=1
00:29:30.249
00:29:30.249 '
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:30.249 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable
00:29:30.250 14:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=()
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=()
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=()
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=()
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=()
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=()
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=()
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:36.816 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:29:36.817 Found 0000:af:00.0 (0x8086 - 0x159b)
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:29:36.817 Found 0000:af:00.1 (0x8086 - 0x159b)
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:29:36.817 Found net devices under 0000:af:00.0: cvl_0_0
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:29:36.817 Found net devices under 0000:af:00.1: cvl_0_1
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:36.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:36.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:29:36.817 00:29:36.817 --- 10.0.0.2 ping statistics --- 00:29:36.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.817 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:36.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:36.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:29:36.817 00:29:36.817 --- 10.0.0.1 ping statistics --- 00:29:36.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.817 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1821486 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1821486 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1821486 ']' 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:36.817 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.818 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:36.818 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:36.818 [2024-12-10 14:31:37.546656] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:36.818 [2024-12-10 14:31:37.547597] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:29:36.818 [2024-12-10 14:31:37.547634] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:37.077 [2024-12-10 14:31:37.634019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:37.077 [2024-12-10 14:31:37.674049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:37.077 [2024-12-10 14:31:37.674083] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:37.077 [2024-12-10 14:31:37.674090] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:37.077 [2024-12-10 14:31:37.674096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:37.077 [2024-12-10 14:31:37.674101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:37.077 [2024-12-10 14:31:37.675503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:37.077 [2024-12-10 14:31:37.675612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.077 [2024-12-10 14:31:37.675613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:37.077 [2024-12-10 14:31:37.744122] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:37.077 [2024-12-10 14:31:37.744939] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:37.077 [2024-12-10 14:31:37.745089] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
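The nvmf_tcp_init trace above boils down to a small, repeatable recipe: the target-side port (cvl_0_0) moves into a dedicated network namespace, the initiator-side port (cvl_0_1) stays in the root namespace, a tagged iptables rule opens TCP/4420, both directions are ping-checked, and the target is launched inside the namespace. A condensed sketch of those steps, using the interface and namespace names from this run:

# Target NIC gets its own namespace; initiator NIC stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# Initiator side is 10.0.0.1; target side, inside the namespace, is 10.0.0.2.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# The SPDK_NVMF comment tag lets teardown later remove exactly this rule.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Sanity-check both directions before any NVMe/TCP traffic flows.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# The target itself runs inside the namespace, in interrupt mode on mask 0xE.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &

Splitting the two ports across namespaces is what makes one dual-port NIC behave like two hosts: traffic between 10.0.0.1 and 10.0.0.2 actually traverses the NIC ports instead of being short-circuited through the local stack.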
00:29:37.077 [2024-12-10 14:31:37.745180] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:37.077 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:37.077 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:29:37.077 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:37.077 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:37.077 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:37.077 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:37.077 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:37.077 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.077 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:37.077 [2024-12-10 14:31:37.812422] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:37.336 Malloc0 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:37.336 Delay0 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:37.336 [2024-12-10 14:31:37.896341] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.336 14:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:37.336 [2024-12-10 14:31:38.025987] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:39.870 Initializing NVMe Controllers 00:29:39.870 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:39.870 controller IO queue size 128 less than required 00:29:39.870 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:39.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:39.870 Initialization complete. Launching workers. 
00:29:39.870 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37711 00:29:39.870 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37768, failed to submit 66 00:29:39.870 success 37711, unsuccessful 57, failed 0 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:39.870 rmmod nvme_tcp 00:29:39.870 rmmod nvme_fabrics 00:29:39.870 rmmod nvme_keyring 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1821486 ']' 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1821486 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1821486 ']' 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1821486 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1821486 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1821486' 00:29:39.870 killing process with pid 1821486 
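Behind the abort summary above (37768 aborts submitted, 37711 successful, 57 unsuccessful, 66 never submitted), target/abort.sh provisioned a deliberately slow namespace so that enough I/O lingers in the target to be abortable: a 64 MiB malloc bdev with 4096-byte blocks, wrapped in a delay bdev that adds 1000000 us (about a second) of latency to each operation class. Condensed from the rpc_cmd calls in the trace, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper:

# Transport, with the exact flags the harness passed.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
# Backing stack: malloc bdev behind a delay bdev, so queued I/O piles up.
scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# Subsystem cnode0 exports the delayed namespace on the namespaced port.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# Drive it: one core, one second, queue depth 128, warnings only.
./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

Teardown then walks the same state back, as the surrounding trace shows: the nvme-tcp/nvme-fabrics modules are unloaded, the target process is killed, and iptr restores the firewall by replaying iptables-save output with the SPDK_NVMF-tagged rule filtered out via grep -v.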
00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1821486 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1821486 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.870 14:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:42.404 00:29:42.404 real 0m11.981s 00:29:42.404 user 0m10.814s 00:29:42.404 sys 0m6.295s 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:42.404 ************************************ 00:29:42.404 END TEST nvmf_abort 00:29:42.404 ************************************ 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:42.404 ************************************ 00:29:42.404 START TEST nvmf_ns_hotplug_stress 00:29:42.404 ************************************ 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:42.404 * Looking for test storage... 
00:29:42.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:42.404 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:42.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.405 --rc genhtml_branch_coverage=1 00:29:42.405 --rc genhtml_function_coverage=1 00:29:42.405 --rc genhtml_legend=1 00:29:42.405 --rc geninfo_all_blocks=1 00:29:42.405 --rc geninfo_unexecuted_blocks=1 00:29:42.405 00:29:42.405 ' 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:42.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.405 --rc genhtml_branch_coverage=1 00:29:42.405 --rc genhtml_function_coverage=1 00:29:42.405 --rc genhtml_legend=1 00:29:42.405 --rc geninfo_all_blocks=1 00:29:42.405 --rc geninfo_unexecuted_blocks=1 00:29:42.405 00:29:42.405 ' 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:42.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.405 --rc genhtml_branch_coverage=1 00:29:42.405 --rc genhtml_function_coverage=1 00:29:42.405 --rc genhtml_legend=1 00:29:42.405 --rc geninfo_all_blocks=1 00:29:42.405 --rc geninfo_unexecuted_blocks=1 00:29:42.405 00:29:42.405 ' 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:42.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.405 --rc genhtml_branch_coverage=1 00:29:42.405 --rc genhtml_function_coverage=1 
00:29:42.405 --rc genhtml_legend=1 00:29:42.405 --rc geninfo_all_blocks=1 00:29:42.405 --rc geninfo_unexecuted_blocks=1 00:29:42.405 00:29:42.405 ' 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
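The lcov probing above is scripts/common.sh's component-wise version compare: lt 1.15 2 runs cmp_versions 1.15 '<' 2, which splits each version string on '.', '-' and ':' (ver1=(1 15), ver2=(2)) and walks the components left to right, so lcov 1.15 is judged older than 2 and the branch/function coverage options get set. A simplified re-implementation of the idea, assuming numeric components (the real cmp_versions also validates each component through its decimal helper):

# Usage: lt 1.15 2  -> returns 0 (true) when $1 < $2, component-wise.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local -i v len d1 d2
    (( len = ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        d1=${ver1[v]:-0}; d2=${ver2[v]:-0}   # missing components compare as 0
        (( d1 > d2 )) && return 1            # strictly greater -> not less-than
        (( d1 < d2 )) && return 0
    done
    return 1                                 # equal -> not less-than
}
lt 1.15 2 && echo "pre-2.0 lcov flags" || echo "2.x lcov flags"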
00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:42.405 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:42.406 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.406 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.406 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.406 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:42.406 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:42.406 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:42.406 14:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:48.972 14:31:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:48.972 14:31:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:48.972 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:48.972 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.972 
14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:48.972 Found net devices under 0000:af:00.0: cvl_0_0 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.972 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:48.973 Found net devices under 0000:af:00.1: cvl_0_1 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.973 14:31:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:48.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:48.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:29:48.973 00:29:48.973 --- 10.0.0.2 ping statistics --- 00:29:48.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.973 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:48.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:48.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:29:48.973 00:29:48.973 --- 10.0.0.1 ping statistics --- 00:29:48.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.973 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1825743 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1825743 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1825743 ']' 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.973 14:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:49.233 [2024-12-10 14:31:49.750434] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:49.233 [2024-12-10 14:31:49.751333] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:29:49.233 [2024-12-10 14:31:49.751367] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:49.233 [2024-12-10 14:31:49.836987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:49.233 [2024-12-10 14:31:49.876497] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:49.233 [2024-12-10 14:31:49.876533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:49.233 [2024-12-10 14:31:49.876540] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:49.233 [2024-12-10 14:31:49.876546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:49.233 [2024-12-10 14:31:49.876551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:49.233 [2024-12-10 14:31:49.877882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:49.233 [2024-12-10 14:31:49.877988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.233 [2024-12-10 14:31:49.877989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:49.233 [2024-12-10 14:31:49.944997] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:49.233 [2024-12-10 14:31:49.945737] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:49.233 [2024-12-10 14:31:49.945836] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:49.233 [2024-12-10 14:31:49.945997] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
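nvmf_tgt is then launched inside that namespace with -m 0xE and --interrupt-mode. The mask is worth decoding: 0xE = 0b1110 selects cores 1 through 3, which matches the "Total cores available: 3" notice and the three reactors above, and --interrupt-mode is what drives the thread.c messages switching the app thread and the three poll groups to intr (event-driven) mode instead of busy polling. A throwaway sketch for decoding such a mask, not part of the test itself:

    # prints "core 1", "core 2", "core 3" for mask=0xE
    mask=0xE
    for ((core = 0; core < 8; core++)); do
        if (( mask & (1 << core) )); then echo "core $core"; fi
    done

waitforlisten then blocks until the target opens /var/tmp/spdk.sock, after which configuration proceeds over RPC.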
00:29:50.168 14:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:50.168 14:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:29:50.168 14:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:50.168 14:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:50.168 14:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:50.168 14:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:50.168 14:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:29:50.168 14:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:50.168 [2024-12-10 14:31:50.814774] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:50.168 14:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:50.427 14:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:50.686 [2024-12-10 14:31:51.199148] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.686 14:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:50.686 14:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:50.945 Malloc0 00:29:50.945 14:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:51.203 Delay0 00:29:51.203 14:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:51.462 14:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:51.462 NULL1 00:29:51.462 14:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
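Stripped of the xtrace noise, the configuration sequence is short. Every command below is lifted from the @27-@36 markers above; only the $rpc shorthand is introduced here for readability:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8 KiB I/O unit size
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0                     # 32 MiB RAM disk, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # auto-assigned NSID 1
    $rpc bdev_null_create NULL1 1000 512                          # 1000 MiB null bdev, 512 B blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # auto-assigned NSID 2

Delay0 wraps Malloc0 with a full second of injected latency in every direction (the four 1000000 us arguments), so requests stay in flight long enough for the hot-remove below to race against active I/O, which is the point of the test.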
00:29:51.721 14:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1826222 00:29:51.721 14:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:51.721 14:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:29:51.721 14:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:51.979 14:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:52.238 14:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:52.238 14:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:52.496 true 00:29:52.496 14:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:29:52.496 14:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:52.496 14:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:52.755 14:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:52.755 14:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:53.013 true 00:29:53.013 14:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:29:53.013 14:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:53.272 14:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:53.531 14:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:53.531 14:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:53.531 true 00:29:53.789 14:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:29:53.789 14:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:53.789 14:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:54.048 14:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:54.048 14:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:54.307 true 00:29:54.307 14:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:29:54.307 14:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:54.570 14:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:54.830 14:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:54.830 14:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:54.830 true 00:29:55.088 14:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:29:55.088 14:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:55.088 14:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:55.347 14:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:55.347 14:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:55.605 true 00:29:55.605 14:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:29:55.605 14:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:55.864 14:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:29:56.123 14:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:29:56.123 14:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:56.123 true 00:29:56.381 14:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:29:56.381 14:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:56.640 14:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:56.640 14:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:56.640 14:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:56.899 true 00:29:56.899 14:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:29:56.899 14:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.157 14:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:57.416 14:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:29:57.416 14:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:57.674 true 00:29:57.674 14:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:29:57.674 14:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.674 14:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:57.933 14:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:57.933 14:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:58.192 true 00:29:58.192 14:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 1826222 00:29:58.192 14:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.451 14:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.709 14:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:58.709 14:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:58.968 true 00:29:58.968 14:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:29:58.968 14:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.968 14:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:59.227 14:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:59.227 14:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:59.485 true 00:29:59.485 14:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:29:59.485 14:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:59.743 14:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:00.001 14:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:00.001 14:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:00.260 true 00:30:00.260 14:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:00.260 14:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:00.260 14:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:00.519 14:32:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:00.519 14:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:00.777 true 00:30:00.777 14:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:00.777 14:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.036 14:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:01.295 14:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:01.295 14:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:01.553 true 00:30:01.553 14:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:01.553 14:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.811 14:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:01.811 14:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:01.811 14:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:02.069 true 00:30:02.069 14:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:02.069 14:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:02.327 14:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:02.585 14:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:02.585 14:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:02.844 true 00:30:02.844 14:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:02.844 14:32:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.102 14:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:03.361 14:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:03.361 14:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:03.361 true 00:30:03.361 14:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:03.361 14:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.619 14:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:03.878 14:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:03.878 14:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:04.136 true 00:30:04.136 14:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:04.136 14:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:04.395 14:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:04.395 14:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:04.395 14:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:04.653 true 00:30:04.653 14:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:04.653 14:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:04.911 14:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:05.168 14:32:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:05.168 14:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:05.426 true 00:30:05.426 14:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:05.426 14:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.684 14:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:05.684 14:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:05.684 14:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:05.942 true 00:30:05.942 14:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:05.942 14:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:06.200 14:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:06.459 14:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:06.459 14:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:06.717 true 00:30:06.717 14:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:06.717 14:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:06.717 14:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:06.975 14:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:06.975 14:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:07.234 true 00:30:07.234 14:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:07.234 14:32:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.492 14:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:07.751 14:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:07.751 14:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:08.010 true 00:30:08.010 14:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:08.010 14:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.010 14:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.269 14:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:08.269 14:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:08.527 true 00:30:08.527 14:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:08.527 14:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.786 14:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:09.045 14:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:09.045 14:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:09.045 true 00:30:09.303 14:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:09.304 14:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:09.304 14:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:09.562 14:32:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:09.562 14:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:09.839 true 00:30:09.839 14:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:09.839 14:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.150 14:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.429 14:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:10.429 14:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:10.429 true 00:30:10.429 14:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:10.429 14:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.712 14:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.970 14:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:10.970 14:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:11.228 true 00:30:11.228 14:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:11.228 14:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.228 14:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:11.487 14:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:30:11.487 14:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:30:11.745 true 00:30:11.745 14:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:11.745 14:32:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.004 14:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:12.262 14:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:30:12.262 14:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:30:12.521 true 00:30:12.521 14:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:12.521 14:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.521 14:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:12.779 14:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:30:12.779 14:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:30:13.037 true 00:30:13.037 14:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:13.037 14:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.296 14:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:13.554 14:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:30:13.554 14:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:30:13.812 true 00:30:13.812 14:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:13.812 14:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.812 14:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.070 14:32:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:30:14.070 14:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:30:14.328 true 00:30:14.328 14:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:14.328 14:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.586 14:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.845 14:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:30:14.845 14:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:30:15.103 true 00:30:15.103 14:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:15.103 14:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.103 14:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:15.361 14:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:30:15.361 14:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:30:15.619 true 00:30:15.619 14:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:15.619 14:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.877 14:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.136 14:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:30:16.136 14:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:30:16.394 true 00:30:16.394 14:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:16.394 14:32:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.394 14:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.652 14:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:30:16.652 14:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:30:16.911 true 00:30:16.911 14:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:16.911 14:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.169 14:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:17.428 14:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:30:17.428 14:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:30:17.686 true 00:30:17.686 14:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:17.686 14:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.944 14:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:17.944 14:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:30:17.944 14:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:30:18.203 true 00:30:18.203 14:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:18.203 14:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.462 14:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.720 14:32:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:30:18.720 14:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:30:18.977 true 00:30:18.977 14:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:18.977 14:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.235 14:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:19.235 14:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:30:19.235 14:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:30:19.493 true 00:30:19.493 14:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:19.493 14:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.751 14:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.009 14:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:30:20.009 14:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:30:20.268 true 00:30:20.268 14:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:20.268 14:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.526 14:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.785 14:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:30:20.785 14:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:30:20.785 true 00:30:20.785 14:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:20.785 14:32:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.043 14:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.302 14:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:30:21.302 14:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:30:21.561 true 00:30:21.561 14:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222 00:30:21.561 14:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.819 14:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.077 14:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:30:22.077 14:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:30:22.077 Initializing NVMe Controllers 00:30:22.077 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:22.077 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:30:22.077 Controller IO queue size 128, less than required. 00:30:22.077 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:22.077 WARNING: Some requested NVMe devices were skipped 00:30:22.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:22.077 Initialization complete. Launching workers. 
00:30:22.077 ========================================================
00:30:22.077                                                                                Latency(us)
00:30:22.077 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:30:22.077 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   28158.37      13.75    4545.63    1562.53    8790.85
00:30:22.077 ========================================================
00:30:22.077 Total                                                                  :   28158.37      13.75    4545.63    1562.53    8790.85
00:30:22.077
00:30:22.077 true
00:30:22.337 14:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826222
00:30:22.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1826222) - No such process
00:30:22.337 14:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1826222
00:30:22.337 14:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:22.337 14:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:22.600 14:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:30:22.600 14:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:30:22.600 14:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:30:22.600 14:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:22.600 14:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:30:22.859 null0
00:30:22.859 14:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:22.859 14:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:22.859 14:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:30:22.859 null1
00:30:23.118 14:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:23.118 14:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:23.118 14:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:30:23.118 null2
00:30:23.118 14:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:23.118 14:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:23.118 14:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:23.377 null3 00:30:23.377 14:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:23.377 14:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:23.377 14:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:23.636 null4 00:30:23.636 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:23.636 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:23.636 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:23.636 null5 00:30:23.636 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:23.636 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:23.636 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:23.894 null6 00:30:23.894 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:23.894 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:23.894 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:24.153 null7 00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
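The prologue above reads directly out of the xtrace: line 44 probes the finished perf process with kill -0, which delivers no signal and only tests that the PID exists (the "No such process" error means it had already exited, so the wait on line 53 returns at once); lines 54-55 detach the two namespaces used by the perf run; and lines 58-60 create eight 100 MiB null bdevs with 4096-byte blocks to back the hotplug stress phase. A minimal sketch of that phase under the trace's own names; the rpc shorthand and perf_pid variable are illustrative, not identifiers from the script, and the error handling around the probe is assumed:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    kill -0 "$perf_pid" || true    # probe only: kill -0 sends no signal
    wait "$perf_pid"               # reap it if it was still running
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        # bdev_null_create <name> <size_mb> <block_size>
        $rpc bdev_null_create "null$i" 100 4096
    done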
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
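The interleaved @62-@64 and @63/@14 entries are eight workers being launched in parallel: each add_remove call is backgrounded and its PID recorded, which is why launcher and worker trace lines overlap. In sketch form, under the same names the trace uses (the & and the pids bookkeeping are implied by the pids+=($!) entries and the later wait):

    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # namespace IDs start at 1
        pids+=($!)
    done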
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:24.153 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1831540 1831541 1831543 1831546 1831547 1831549 1831551 1831553
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.154 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:30:24.413 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:24.413 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:24.413 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:30:24.413 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:30:24.413 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:30:24.413 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:30:24.413 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
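Line 66's wait blocks on all eight worker PIDs (1831540 through 1831553). The worker body itself is spelled out by the @14-@18 entries: ten iterations of hot-adding one namespace to cnode1 and immediately hot-removing it, racing the other seven workers on the same subsystem. Reconstructed from the trace, with the rpc shorthand again illustrative:

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

The shuffled completion order of the remove calls above (1, 2, 3, 4, 5, 7, 6, 8, and other permutations in later rounds) is ordinary scheduler interleaving across the eight background workers, which is exactly the hotplug race the test is designed to exercise.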
00:30:24.413 14:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:30:24.413 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:24.413 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.413 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:30:24.413 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:24.413 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.413 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:24.413 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:24.413 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.413 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:24.413 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:24.413 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.413 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:24.413 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:24.413 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.413 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:24.413 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:24.413 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.413 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:24.413 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:24.413 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.413 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:24.670 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:24.670 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.670 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:30:24.670 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:24.670 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:24.670 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:30:24.670 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:30:24.670 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:30:24.670 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:30:24.670 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:30:24.670 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:24.928 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:25.187 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:30:25.187 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:25.187 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:30:25.187 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:30:25.187 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:30:25.187 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:25.187 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:30:25.187 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:25.446 14:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:30:25.446 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:30:25.446 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:25.446 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:30:25.446 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:30:25.446 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:30:25.446 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:30:25.446 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:25.706 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:30:25.965 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:30:25.965 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:30:25.965 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:25.965 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:25.965 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:30:25.965 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:30:25.965 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:30:25.965 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:30:26.224 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:26.224 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:26.224 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:26.224 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:26.224 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:26.224 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:26.224 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:26.224 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:26.224 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:26.224 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:26.224 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:26.224 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:30:26.224 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:26.224 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:26.224 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:26.225 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:26.225 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:26.225 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:30:26.225 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:26.225 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
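While the workers race like this, the set of namespaces attached to cnode1 is deliberately in flux from one instant to the next. Not something this run does, but a way to snapshot the subsystem state between rounds would be the standard nvmf_get_subsystems RPC (the jq filter is an assumption about the output shape, a JSON array of subsystem objects carrying nqn and namespaces fields):

    $rpc nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'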
00:30:26.225 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:26.225 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:26.225 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:26.225 14:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:26.483 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:30:26.483 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:26.483 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:30:26.483 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:26.483 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:30:26.483 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:30:26.483 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:30:26.483 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:30:26.483 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:26.483 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:26.483 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:26.483 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:26.483 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:26.483 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:30:26.742 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:27.001 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:27.260 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:30:27.260 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:27.260 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:30:27.260 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:30:27.260 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:27.260 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:30:27.260 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:30:27.260 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:30:27.518 14:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:27.518 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:27.518 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:27.518 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:27.518 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:27.518 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:27.519 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:27.519 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:27.519 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:30:27.519 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:27.519 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:27.519 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:27.519 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:27.519 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:27.519 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:27.519 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:27.519 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.519 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:27.519 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.519 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.519 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.519 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:27.519 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.519 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:27.519 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:27.519 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.777 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:27.778 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.778 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.778 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:27.778 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.778 
14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.778 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:28.036 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:28.036 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:28.036 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:28.036 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:28.036 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:28.036 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:28.036 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:28.036 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:28.295 rmmod nvme_tcp 00:30:28.295 rmmod nvme_fabrics 00:30:28.295 rmmod nvme_keyring 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1825743 ']' 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1825743 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1825743 ']' 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1825743 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:28.295 14:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1825743 00:30:28.295 14:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:28.295 14:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:28.295 14:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1825743' 00:30:28.295 killing process with pid 1825743 00:30:28.295 14:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1825743 00:30:28.295 14:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1825743 00:30:28.554 14:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:28.554 14:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:28.554 14:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:28.554 14:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:30:28.554 14:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:30:28.554 14:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:28.554 14:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:30:28.554 14:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:28.554 14:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:28.554 14:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.554 14:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:28.554 14:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:31.097 00:30:31.097 real 0m48.636s 00:30:31.097 user 3m2.107s 00:30:31.097 sys 0m22.406s 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:31.097 ************************************ 00:30:31.097 END TEST nvmf_ns_hotplug_stress 00:30:31.097 ************************************ 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@10 -- # set +x 00:30:31.097 ************************************ 00:30:31.097 START TEST nvmf_delete_subsystem 00:30:31.097 ************************************ 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:31.097 * Looking for test storage... 00:30:31.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:31.097 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:31.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.098 --rc genhtml_branch_coverage=1 00:30:31.098 --rc genhtml_function_coverage=1 00:30:31.098 --rc genhtml_legend=1 00:30:31.098 --rc geninfo_all_blocks=1 00:30:31.098 --rc geninfo_unexecuted_blocks=1 00:30:31.098 00:30:31.098 ' 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:31.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.098 --rc genhtml_branch_coverage=1 00:30:31.098 --rc genhtml_function_coverage=1 00:30:31.098 --rc genhtml_legend=1 00:30:31.098 --rc geninfo_all_blocks=1 00:30:31.098 --rc geninfo_unexecuted_blocks=1 00:30:31.098 00:30:31.098 ' 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:31.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.098 --rc genhtml_branch_coverage=1 00:30:31.098 --rc genhtml_function_coverage=1 00:30:31.098 --rc genhtml_legend=1 00:30:31.098 --rc geninfo_all_blocks=1 00:30:31.098 --rc geninfo_unexecuted_blocks=1 00:30:31.098 00:30:31.098 ' 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:31.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.098 --rc genhtml_branch_coverage=1 00:30:31.098 --rc genhtml_function_coverage=1 00:30:31.098 --rc 
genhtml_legend=1 00:30:31.098 --rc geninfo_all_blocks=1 00:30:31.098 --rc geninfo_unexecuted_blocks=1 00:30:31.098 00:30:31.098 ' 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.098 14:32:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:31.098 14:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:37.671 14:32:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:37.671 14:32:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:37.671 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:37.671 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.671 14:32:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:37.671 Found net devices under 0000:af:00.0: cvl_0_0 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:37.671 Found net devices under 0000:af:00.1: cvl_0_1 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:37.671 14:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:37.671 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:37.671 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:37.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:37.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:30:37.672 00:30:37.672 --- 10.0.0.2 ping statistics --- 00:30:37.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.672 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:37.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:37.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:30:37.672 00:30:37.672 --- 10.0.0.1 ping statistics --- 00:30:37.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.672 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1836246 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1836246 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1836246 ']' 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:37.672 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:37.672 [2024-12-10 14:32:38.320314] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:37.672 [2024-12-10 14:32:38.321215] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:30:37.672 [2024-12-10 14:32:38.321257] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:37.672 [2024-12-10 14:32:38.402563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:37.931 [2024-12-10 14:32:38.442303] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:37.931 [2024-12-10 14:32:38.442336] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:37.931 [2024-12-10 14:32:38.442343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:37.931 [2024-12-10 14:32:38.442350] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:37.931 [2024-12-10 14:32:38.442356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:37.931 [2024-12-10 14:32:38.443486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:37.931 [2024-12-10 14:32:38.443488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.931 [2024-12-10 14:32:38.510510] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:37.931 [2024-12-10 14:32:38.511026] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:37.931 [2024-12-10 14:32:38.511186] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:37.931 [2024-12-10 14:32:38.576280] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:37.931 [2024-12-10 14:32:38.604579] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:37.931 NULL1 00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.931 14:32:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:37.931 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.932 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:37.932 Delay0 00:30:37.932 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.932 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.932 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.932 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:37.932 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.932 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1836401 00:30:37.932 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:37.932 14:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:38.191 [2024-12-10 14:32:38.716628] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
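Steps @13 through @26 above build the whole fixture through rpc_cmd. The same setup restated as direct RPC calls, with all flags copied from the trace; invoking scripts/rpc.py directly is an assumption about what the rpc_cmd wrapper forwards to.
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512     # 1000 MiB null bdev, 512 B blocks
    # hold every I/O for ~1 s (-r/-w: average, -t/-n: tail latency, microseconds)
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
With spdk_nvme_perf run at -q 128 against a device that holds each I/O for about one second, 128 commands stay outstanding per queue, so deleting the subsystem mid-run (next step) aborts them in bulk; that is the point of the test, and it also explains why the perf summaries further down report average latencies just over 1,000,000 us.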
00:30:40.094 14:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:40.094 14:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.094 14:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[repeated runs of 'Read/Write completed with error (sct=0, sc=8)' completions, interleaved with 'starting I/O failed: -6' markers, elided]
00:30:40.095 [2024-12-10 14:32:40.798911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c63b40 is same with the state(6) to be set
[further 'completed with error (sct=0, sc=8)' runs and 'starting I/O failed: -6' markers elided]
00:30:40.095 [2024-12-10 14:32:40.802706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fede0000c40 is same with the state(6) to be set
[further 'completed with error (sct=0, sc=8)' runs elided]
00:30:41.032 [2024-12-10 14:32:41.771198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c649b0 is same with the state(6) to be set
[further 'completed with error (sct=0, sc=8)' runs elided]
00:30:41.292 [2024-12-10 14:32:41.802141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c632c0 is same with the state(6) to be set
[further 'completed with error (sct=0, sc=8)' runs elided]
00:30:41.292 [2024-12-10 14:32:41.802502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c63960 is same with the state(6) to be set
[further 'completed with error (sct=0, sc=8)' runs elided]
00:30:41.293 [2024-12-10 14:32:41.804235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fede000d7c0 is same with the state(6) to be set
[further 'completed with error (sct=0, sc=8)' runs elided]
00:30:41.293 [2024-12-10 14:32:41.804830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fede000d020 is same with the state(6) to be set
00:30:41.293 Initializing NVMe Controllers 00:30:41.293 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:41.293 Controller IO queue size 128, less than required. 00:30:41.293 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:41.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:41.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:41.293 Initialization complete. Launching workers. 00:30:41.293 ======================================================== 00:30:41.293 Latency(us) 00:30:41.293 Device Information : IOPS MiB/s Average min max 00:30:41.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.14 0.09 879543.03 313.04 1006584.12 00:30:41.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 153.25 0.07 936622.49 247.07 1043091.25 00:30:41.293 ======================================================== 00:30:41.293 Total : 330.39 0.16 906019.65 247.07 1043091.25 00:30:41.293 00:30:41.293 [2024-12-10 14:32:41.805446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c649b0 (9): Bad file descriptor 00:30:41.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:41.293 14:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.293 14:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:30:41.293 14:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1836401 00:30:41.293 14:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:30:41.861 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1836401 00:30:41.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1836401) - No such process 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1836401 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1836401 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1836401 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:41.862 [2024-12-10 14:32:42.340488] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1836866 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1836866 00:30:41.862 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:41.862 [2024-12-10 14:32:42.425193] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
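The xtrace above and below shows delete_subsystem.sh running the same pattern twice (script lines 34-36 for the first perf run, 56-60 for the second): poll the perf process until the deleted subsystem makes it exit, then assert with the NOT helper that wait on the dead pid really fails. A minimal sketch of that loop, not the script's verbatim text; the loop limit is 30 for the first run and 20 for the second.
    delay=0
    while kill -0 "$perf_pid"; do          # bash prints 'kill: (pid) - No such
                                           # process' once perf is gone, as in the log
        (( delay++ > 20 )) && exit 1       # fail if perf never notices the delete
        sleep 0.5
    done
    NOT wait "$perf_pid"                   # common.sh helper: passes only if wait fails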
00:30:42.121 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:42.121 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1836866 00:30:42.121 14:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:42.689 14:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:42.689 14:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1836866 00:30:42.689 14:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:43.257 14:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:43.257 14:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1836866 00:30:43.257 14:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:43.824 14:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:43.824 14:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1836866 00:30:43.824 14:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:44.392 14:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:44.392 14:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1836866 00:30:44.392 14:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:44.650 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:44.650 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1836866 00:30:44.650 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:45.216 Initializing NVMe Controllers 00:30:45.216 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:45.216 Controller IO queue size 128, less than required. 00:30:45.216 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:45.216 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:45.216 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:45.216 Initialization complete. Launching workers. 
00:30:45.216 ======================================================== 00:30:45.216 Latency(us) 00:30:45.216 Device Information : IOPS MiB/s Average min max 00:30:45.216 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002819.03 1000163.61 1042894.33 00:30:45.216 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005308.38 1000483.84 1042872.20 00:30:45.216 ======================================================== 00:30:45.216 Total : 256.00 0.12 1004063.70 1000163.61 1042894.33 00:30:45.216 00:30:45.216 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:45.216 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1836866 00:30:45.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1836866) - No such process 00:30:45.216 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1836866 00:30:45.217 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:45.217 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:30:45.217 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:45.217 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:30:45.217 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:45.217 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:30:45.217 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:45.217 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:45.217 rmmod nvme_tcp 00:30:45.217 rmmod nvme_fabrics 00:30:45.217 rmmod nvme_keyring 00:30:45.217 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:45.217 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:30:45.217 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:30:45.217 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1836246 ']' 00:30:45.217 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1836246 00:30:45.217 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1836246 ']' 00:30:45.217 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1836246 00:30:45.217 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:30:45.476 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:45.476 14:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1836246 00:30:45.476 14:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:45.476 14:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:45.476 14:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1836246' 00:30:45.476 killing process with pid 1836246 00:30:45.476 14:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1836246 00:30:45.476 14:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1836246 00:30:45.476 14:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:45.476 14:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:45.476 14:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:45.476 14:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:45.476 14:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:30:45.476 14:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:45.476 14:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:30:45.476 14:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:45.476 14:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:45.476 14:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.476 14:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:45.476 14:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:48.011 00:30:48.011 real 0m16.898s 00:30:48.011 user 0m26.248s 00:30:48.011 sys 0m6.756s 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:48.011 ************************************ 00:30:48.011 END TEST nvmf_delete_subsystem 00:30:48.011 ************************************ 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:48.011 ************************************ 00:30:48.011 START TEST nvmf_host_management 00:30:48.011 ************************************ 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:48.011 * Looking for test storage... 00:30:48.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:48.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.011 --rc genhtml_branch_coverage=1 00:30:48.011 --rc genhtml_function_coverage=1 00:30:48.011 --rc genhtml_legend=1 00:30:48.011 --rc geninfo_all_blocks=1 00:30:48.011 --rc geninfo_unexecuted_blocks=1 00:30:48.011 00:30:48.011 ' 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:48.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.011 --rc genhtml_branch_coverage=1 00:30:48.011 --rc genhtml_function_coverage=1 00:30:48.011 --rc genhtml_legend=1 00:30:48.011 --rc geninfo_all_blocks=1 00:30:48.011 --rc geninfo_unexecuted_blocks=1 00:30:48.011 00:30:48.011 ' 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:48.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.011 --rc genhtml_branch_coverage=1 00:30:48.011 --rc genhtml_function_coverage=1 00:30:48.011 --rc genhtml_legend=1 00:30:48.011 --rc geninfo_all_blocks=1 00:30:48.011 --rc geninfo_unexecuted_blocks=1 00:30:48.011 00:30:48.011 ' 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:48.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.011 --rc genhtml_branch_coverage=1 00:30:48.011 --rc genhtml_function_coverage=1 00:30:48.011 --rc genhtml_legend=1 
00:30:48.011 --rc geninfo_all_blocks=1 00:30:48.011 --rc geninfo_unexecuted_blocks=1 00:30:48.011 00:30:48.011 ' 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:48.011 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:48.012 14:32:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:48.012 14:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:54.584 14:32:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:54.584 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:54.584 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
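
The trace above is nvmf/common.sh probing the PCI bus for supported NICs: it builds e810/x722/mlx device-ID lists, filters them down to the e810 parts requested by SPDK_TEST_NVMF_NICS=e810, and then resolves each matching PCI function to its kernel net device through sysfs. A minimal standalone sketch of that sysfs walk follows, assuming only standard sysfs layout; the helper name is ours, and the 0x8086:0x1592/0x159b IDs are the E810 entries echoed in the log ("Found 0000:af:00.0 (0x8086 - 0x159b)"), not an exhaustive list.

#!/usr/bin/env bash
# Hedged sketch of the gather_supported_nvmf_pci_devs sysfs walk, not the
# literal nvmf/common.sh code.
find_e810_net_devs() {
    local pci vendor device net
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")      # e.g. 0x8086
        device=$(<"$pci/device")      # e.g. 0x159b
        [[ $vendor == 0x8086 ]] || continue
        [[ $device == 0x1592 || $device == 0x159b ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        # every net device registered on this PCI function, e.g. cvl_0_0
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "  net device: ${net##*/}"
        done
    done
}
find_e810_net_devs
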
00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:54.584 Found net devices under 0000:af:00.0: cvl_0_0 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:54.584 Found net devices under 0000:af:00.1: cvl_0_1 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:54.584 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:54.585 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:54.585 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:54.585 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:54.585 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:54.585 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:54.585 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:54.585 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:54.585 14:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:54.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:54.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:30:54.585 00:30:54.585 --- 10.0.0.2 ping statistics --- 00:30:54.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.585 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:54.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:54.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:30:54.585 00:30:54.585 --- 10.0.0.1 ping statistics --- 00:30:54.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.585 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1841330 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1841330 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1841330 ']' 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:54.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:54.585 14:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:54.585 [2024-12-10 14:32:55.291287] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:54.585 [2024-12-10 14:32:55.292171] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:30:54.585 [2024-12-10 14:32:55.292205] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:54.844 [2024-12-10 14:32:55.376924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:54.844 [2024-12-10 14:32:55.418123] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:54.844 [2024-12-10 14:32:55.418161] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:54.844 [2024-12-10 14:32:55.418168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:54.844 [2024-12-10 14:32:55.418174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:54.844 [2024-12-10 14:32:55.418180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:54.844 [2024-12-10 14:32:55.419640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:54.844 [2024-12-10 14:32:55.419750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:54.844 [2024-12-10 14:32:55.419855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.844 [2024-12-10 14:32:55.419856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:54.844 [2024-12-10 14:32:55.487029] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:54.844 [2024-12-10 14:32:55.487489] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:54.844 [2024-12-10 14:32:55.487866] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:54.844 [2024-12-10 14:32:55.488120] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:54.844 [2024-12-10 14:32:55.488162] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
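
Two things happen in the stretch above. First, nvmf_tcp_init splits one physical host into initiator and target: the target-side E810 port is moved into its own network namespace, both sides get 10.0.0.x addresses, the NVMe/TCP port is opened in iptables, and a ping in each direction proves the path. Second, nvmfappstart launches nvmf_tgt inside that namespace with --interrupt-mode, so the reactors sleep on events instead of busy-polling, which is what the "Set spdk_thread (...) to intr mode" notices confirm. A sketch of the same plumbing, with every name, address and flag copied from the log (this is an illustration, not the literal nvmf/common.sh code):

# target namespace; the target NIC cvl_0_0 lives inside it
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side, default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                             # initiator -> target sanity check
# launch the target inside the namespace with the flags seen in the trace:
# shm id 0, 0xFFFF trace mask, interrupt mode, cores 1-4 (-m 0x1E)
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
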
00:30:55.412 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:55.412 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:55.412 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:55.412 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:55.412 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:55.671 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:55.671 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:55.671 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.671 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:55.671 [2024-12-10 14:32:56.160598] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:55.671 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.671 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:55.671 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:55.671 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:55.671 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:55.671 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:55.671 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:55.671 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.671 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:55.671 Malloc0 00:30:55.671 [2024-12-10 14:32:56.244760] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.671 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.671 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:55.671 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:55.671 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:55.671 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1841591 00:30:55.671 14:32:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1841591 /var/tmp/bdevperf.sock 00:30:55.671 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1841591 ']' 00:30:55.671 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:55.672 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:55.672 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:55.672 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:55.672 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:55.672 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:55.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:55.672 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:55.672 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:55.672 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:55.672 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:55.672 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:55.672 { 00:30:55.672 "params": { 00:30:55.672 "name": "Nvme$subsystem", 00:30:55.672 "trtype": "$TEST_TRANSPORT", 00:30:55.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:55.672 "adrfam": "ipv4", 00:30:55.672 "trsvcid": "$NVMF_PORT", 00:30:55.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:55.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:55.672 "hdgst": ${hdgst:-false}, 00:30:55.672 "ddgst": ${ddgst:-false} 00:30:55.672 }, 00:30:55.672 "method": "bdev_nvme_attach_controller" 00:30:55.672 } 00:30:55.672 EOF 00:30:55.672 )") 00:30:55.672 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:55.672 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
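
The heredoc traced above is gen_nvmf_target_json building the bdevperf configuration: one bdev_nvme_attach_controller entry per subsystem, joined with IFS=',' and pretty-printed through jq, then handed to bdevperf as --json /dev/fd/63 via bash process substitution (the rendered JSON is printed a few lines below). A hedged reconstruction of an equivalent standalone invocation follows; the inner attach-controller object matches what the trace prints, while the outer "subsystems"/"bdev" wrapper is our assumption about how the fragment is framed for bdevperf, since the log does not show it verbatim.

cfg='{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}'
# flags copied from the trace: queue depth 64, 64 KiB I/O, verify workload, 10 s
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(printf '%s' "$cfg") -q 64 -o 65536 -w verify -t 10
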
00:30:55.672 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:55.672 14:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:55.672 "params": { 00:30:55.672 "name": "Nvme0", 00:30:55.672 "trtype": "tcp", 00:30:55.672 "traddr": "10.0.0.2", 00:30:55.672 "adrfam": "ipv4", 00:30:55.672 "trsvcid": "4420", 00:30:55.672 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:55.672 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:55.672 "hdgst": false, 00:30:55.672 "ddgst": false 00:30:55.672 }, 00:30:55.672 "method": "bdev_nvme_attach_controller" 00:30:55.672 }' 00:30:55.672 [2024-12-10 14:32:56.339952] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:30:55.672 [2024-12-10 14:32:56.340000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1841591 ] 00:30:55.930 [2024-12-10 14:32:56.421357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.931 [2024-12-10 14:32:56.460942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.931 Running I/O for 10 seconds... 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1244 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1244 -ge 100 ']' 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.582 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:56.582 [2024-12-10 14:32:57.280247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.582 [2024-12-10 14:32:57.280284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.582 [2024-12-10 14:32:57.280293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.582 [2024-12-10 14:32:57.280305] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.582 [2024-12-10 14:32:57.280312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.582 [2024-12-10 14:32:57.280318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.582 [2024-12-10 14:32:57.280324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.582 [2024-12-10 14:32:57.280330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.582 [2024-12-10 14:32:57.280336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.582 [2024-12-10 14:32:57.280341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.582 [2024-12-10 14:32:57.280347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 
00:30:56.582 [2024-12-10 14:32:57.280353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280365] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280371] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280390] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280396] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280437] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280486] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.280493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fa60 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.283057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.583 [2024-12-10 14:32:57.283092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.283103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.583 [2024-12-10 14:32:57.283111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.283120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.583 [2024-12-10 14:32:57.283126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.283134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.583 [2024-12-10 14:32:57.283141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.283147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d71aa0 is same with the state(6) to be set 00:30:56.583 [2024-12-10 14:32:57.284780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 14:32:57.284806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.284820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 14:32:57.284829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.284838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 14:32:57.284846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.284854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 14:32:57.284860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.284868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 
[2024-12-10 14:32:57.284875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.284885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 14:32:57.284893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.284909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 14:32:57.284917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.284925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 14:32:57.284933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.284941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 14:32:57.284949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.284957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 14:32:57.284964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.284971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 14:32:57.284978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.284986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 14:32:57.284993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.285001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 14:32:57.285008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.285017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 14:32:57.285023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.285031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 
14:32:57.285037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.285045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 14:32:57.285053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.285061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 14:32:57.285068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.285076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 14:32:57.285082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.285091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 14:32:57.285101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.285111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 14:32:57.285119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.285127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 14:32:57.285133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.285141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 14:32:57.285147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.285156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 14:32:57.285163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.285172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 [2024-12-10 14:32:57.285179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.583 [2024-12-10 14:32:57.285187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.583 14:32:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.583 [2024-12-10 14:32:57.285193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.584 [2024-12-10 14:32:57.285206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.584 [2024-12-10 14:32:57.285213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.584 [2024-12-10 14:32:57.285227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.584 [2024-12-10 14:32:57.285234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.584 [2024-12-10 14:32:57.285242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.584 [2024-12-10 14:32:57.285249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.584 [2024-12-10 14:32:57.285257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.584 [2024-12-10 14:32:57.285263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.584 [2024-12-10 14:32:57.285274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.584 [2024-12-10 14:32:57.285281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.584 [2024-12-10 14:32:57.285289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.584 [2024-12-10 14:32:57.285297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.584 [2024-12-10 14:32:57.285305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.584 [2024-12-10 14:32:57.285311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.584 [2024-12-10 14:32:57.285319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.584 [2024-12-10 14:32:57.285327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.584 [2024-12-10 14:32:57.285335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.584 [2024-12-10 14:32:57.285342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.584 [2024-12-10 14:32:57.285351] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.584 [2024-12-10 14:32:57.285358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.584 [... 16 more identical WRITE / ABORTED - SQ DELETION pairs: cid 5-20, lba 41600-43520, len:128 each ...] 00:30:56.584 [2024-12-10 14:32:57.285611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.584 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 [2024-12-10 14:32:57.285618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.584 [... 11 more identical pairs: cid 22-32, lba 43776-45056, len:128 each ...] 00:30:56.584 [2024-12-10 14:32:57.285795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.584 [2024-12-10 14:32:57.285803] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.584 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.584 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:56.585 [2024-12-10 14:32:57.286750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:56.585 task offset: 37120 on job bdev=Nvme0n1 fails 00:30:56.585 00:30:56.585 Latency(us) 00:30:56.585 [2024-12-10T13:32:57.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.585 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:56.585 Job: Nvme0n1 ended in about 0.67 seconds with error 00:30:56.585 Verification LBA range: start 0x0 length 0x400 00:30:56.585 Nvme0n1 : 0.67 1965.42 122.84 95.73 0.00 30362.89 1365.33 31207.62 00:30:56.585 [2024-12-10T13:32:57.325Z] =================================================================================================================== 00:30:56.585 [2024-12-10T13:32:57.325Z] Total : 1965.42 122.84 95.73 0.00 30362.89 1365.33 31207.62 00:30:56.585 [2024-12-10 14:32:57.289169] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:56.585 [2024-12-10 14:32:57.289191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d71aa0 (9): Bad file descriptor 00:30:56.585 [2024-12-10 14:32:57.290172] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:30:56.585 [2024-12-10 14:32:57.290256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:56.585 [2024-12-10 14:32:57.290279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.585 [2024-12-10 14:32:57.290296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:30:56.585 [2024-12-10 14:32:57.290304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:30:56.585 [2024-12-10 14:32:57.290311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.585 [2024-12-10 14:32:57.290319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d71aa0 00:30:56.585 [2024-12-10 14:32:57.290339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d71aa0 (9): Bad file descriptor 00:30:56.585 [2024-12-10 14:32:57.290351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:56.585 [2024-12-10 14:32:57.290359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:56.585 [2024-12-10 14:32:57.290367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:56.585 [2024-12-10 14:32:57.290376] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
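The dump above is the expected negative path of this host-management case: bdevperf connected as nqn.2016-06.io.spdk:host0 before that host NQN was on the subsystem's allow list, so the target rejects the FABRIC CONNECT with sct 1, sc 132 (the COMMAND SPECIFIC 01/84 status printed above, i.e. host not allowed), and every queued 128-block WRITE is failed with ABORTED - SQ DELETION. The remedy the script applies at host_management.sh@85 is a single RPC; a minimal sketch, using the rpc.py path from this run and assuming the subsystem was created earlier with allow-any-host disabled (that creation step is not visible in this excerpt):

  # Grant the initiator's host NQN access to the subsystem over JSON-RPC.
  # Path and NQNs are exactly as logged above.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Until this call succeeds, CONNECT from host0 fails exactly as traced above.
  $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

Once the host is admitted, the retried bdevperf run below completes its one-second verify workload at roughly 2050 IOPS instead of erroring out.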
00:30:56.585 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.585 14:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:30:57.982 14:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1841591 00:30:57.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1841591) - No such process 00:30:57.982 14:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:30:57.982 14:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:30:57.982 14:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:57.982 14:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:30:57.982 14:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:57.982 14:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:57.982 14:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:57.982 14:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:57.982 { 00:30:57.982 "params": { 00:30:57.982 "name": "Nvme$subsystem", 00:30:57.982 "trtype": "$TEST_TRANSPORT", 00:30:57.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:57.982 "adrfam": "ipv4", 00:30:57.982 "trsvcid": "$NVMF_PORT", 00:30:57.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:57.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:57.982 "hdgst": ${hdgst:-false}, 00:30:57.982 "ddgst": ${ddgst:-false} 00:30:57.982 }, 00:30:57.982 "method": "bdev_nvme_attach_controller" 00:30:57.982 } 00:30:57.982 EOF 00:30:57.982 )") 00:30:57.982 14:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:57.982 14:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:30:57.982 14:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:57.982 14:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:57.982 "params": { 00:30:57.982 "name": "Nvme0", 00:30:57.982 "trtype": "tcp", 00:30:57.982 "traddr": "10.0.0.2", 00:30:57.982 "adrfam": "ipv4", 00:30:57.982 "trsvcid": "4420", 00:30:57.982 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:57.982 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:57.982 "hdgst": false, 00:30:57.982 "ddgst": false 00:30:57.982 }, 00:30:57.982 "method": "bdev_nvme_attach_controller" 00:30:57.982 }' 00:30:57.982 [2024-12-10 14:32:58.352456] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:30:57.982 [2024-12-10 14:32:58.352511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1841878 ] 00:30:57.982 [2024-12-10 14:32:58.435326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.982 [2024-12-10 14:32:58.473911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.241 Running I/O for 1 seconds... 00:30:59.178 2012.00 IOPS, 125.75 MiB/s 00:30:59.178 Latency(us) 00:30:59.178 [2024-12-10T13:32:59.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.178 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:59.178 Verification LBA range: start 0x0 length 0x400 00:30:59.178 Nvme0n1 : 1.01 2050.42 128.15 0.00 0.00 30611.26 2793.08 26963.38 00:30:59.178 [2024-12-10T13:32:59.918Z] =================================================================================================================== 00:30:59.178 [2024-12-10T13:32:59.918Z] Total : 2050.42 128.15 0.00 0.00 30611.26 2793.08 26963.38 00:30:59.437 14:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:30:59.437 14:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:30:59.437 14:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:59.437 14:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:59.437 14:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:30:59.437 14:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:59.437 14:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:30:59.437 14:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:59.437 14:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:30:59.437 14:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:59.437 14:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:59.437 rmmod nvme_tcp 00:30:59.437 rmmod nvme_fabrics 00:30:59.437 rmmod nvme_keyring 00:30:59.437 14:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:59.438 14:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:30:59.438 14:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:30:59.438 14:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1841330 ']' 00:30:59.438 14:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1841330 00:30:59.438 14:32:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1841330 ']' 00:30:59.438 14:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1841330 00:30:59.438 14:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:30:59.438 14:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:59.438 14:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1841330 00:30:59.438 14:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:59.438 14:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:59.438 14:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1841330' 00:30:59.438 killing process with pid 1841330 00:30:59.438 14:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1841330 00:30:59.438 14:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1841330 00:30:59.697 [2024-12-10 14:33:00.205249] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:59.697 14:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:59.697 14:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:59.697 14:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:59.697 14:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:59.697 14:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:30:59.697 14:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:59.697 14:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:30:59.697 14:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:59.697 14:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:59.697 14:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.697 14:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.697 14:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.603 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:01.603 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:01.603 00:31:01.603 real 0m13.988s 00:31:01.603 user 
0m19.427s 00:31:01.603 sys 0m7.141s 00:31:01.603 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:01.603 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:01.603 ************************************ 00:31:01.603 END TEST nvmf_host_management 00:31:01.603 ************************************ 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:01.863 ************************************ 00:31:01.863 START TEST nvmf_lvol 00:31:01.863 ************************************ 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:01.863 * Looking for test storage... 00:31:01.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:01.863 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:01.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.863 --rc genhtml_branch_coverage=1 00:31:01.863 --rc genhtml_function_coverage=1 00:31:01.863 --rc genhtml_legend=1 00:31:01.863 --rc geninfo_all_blocks=1 00:31:01.863 --rc geninfo_unexecuted_blocks=1 00:31:01.863 00:31:01.863 ' 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:01.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.864 --rc genhtml_branch_coverage=1 00:31:01.864 --rc genhtml_function_coverage=1 00:31:01.864 --rc genhtml_legend=1 00:31:01.864 --rc geninfo_all_blocks=1 00:31:01.864 --rc geninfo_unexecuted_blocks=1 00:31:01.864 00:31:01.864 ' 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:01.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.864 --rc genhtml_branch_coverage=1 00:31:01.864 --rc genhtml_function_coverage=1 00:31:01.864 --rc genhtml_legend=1 00:31:01.864 --rc geninfo_all_blocks=1 00:31:01.864 --rc geninfo_unexecuted_blocks=1 00:31:01.864 00:31:01.864 ' 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:01.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.864 --rc genhtml_branch_coverage=1 00:31:01.864 --rc genhtml_function_coverage=1 
00:31:01.864 --rc genhtml_legend=1 00:31:01.864 --rc geninfo_all_blocks=1 00:31:01.864 --rc geninfo_unexecuted_blocks=1 00:31:01.864 00:31:01.864 ' 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:01.864 14:33:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:01.864 14:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:08.437 14:33:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:08.437 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:08.437 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.437 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:08.438 Found net devices under 0000:af:00.0: cvl_0_0 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:08.438 Found net devices under 0000:af:00.1: cvl_0_1 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:08.438 
14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:08.438 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:08.697 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:08.697 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:08.697 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:08.697 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:08.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:08.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:31:08.697 00:31:08.697 --- 10.0.0.2 ping statistics --- 00:31:08.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.697 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:31:08.697 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:08.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:08.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:31:08.698 00:31:08.698 --- 10.0.0.1 ping statistics --- 00:31:08.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.698 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:31:08.698 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:08.698 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:08.698 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:08.698 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:08.698 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:08.698 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:08.698 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:08.698 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:08.698 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:08.698 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:08.698 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:08.698 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:08.698 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:08.698 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1846586 00:31:08.698 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:08.698 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1846586 00:31:08.698 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1846586 ']' 00:31:08.698 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.698 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:08.698 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.698 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:08.698 14:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:08.698 [2024-12-10 14:33:09.352994] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
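For orientation amid the cvl_0_* plumbing traced above, this is the two-sided topology the harness just built, consolidated from the logged commands (interface names and addresses exactly as in this run); a sketch of the effect, not the verbatim common.sh code:

  # The target NIC port is isolated in its own network namespace; the
  # initiator port stays in the root namespace, giving a real TCP hop.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP listener port toward the initiator; the logged rule
  # also tags itself with an SPDK_NVMF comment so cleanup can find it.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two ping checks above confirm both directions work before nvmf_tgt is launched inside the namespace with -i 0 -e 0xFFFF --interrupt-mode -m 0x7.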
00:31:08.698 [2024-12-10 14:33:09.353887] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:31:08.698 [2024-12-10 14:33:09.353920] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:08.958 [2024-12-10 14:33:09.440796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:08.958 [2024-12-10 14:33:09.480864] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:08.958 [2024-12-10 14:33:09.480901] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:08.958 [2024-12-10 14:33:09.480907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:08.958 [2024-12-10 14:33:09.480913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:08.958 [2024-12-10 14:33:09.480918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:08.958 [2024-12-10 14:33:09.482237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:08.958 [2024-12-10 14:33:09.482274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:08.958 [2024-12-10 14:33:09.482278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.958 [2024-12-10 14:33:09.549357] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:08.958 [2024-12-10 14:33:09.550112] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:08.958 [2024-12-10 14:33:09.550439] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:08.958 [2024-12-10 14:33:09.550474] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:31:09.527 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:09.527 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:31:09.527 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:09.527 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:09.527 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:09.527 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:09.527 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:09.787 [2024-12-10 14:33:10.387096] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:09.787 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:10.046 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:10.046 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:10.305 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:10.306 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:10.564 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:10.564 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d82cb069-eaca-4a59-b3f8-d2f82d1fcd75 00:31:10.564 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d82cb069-eaca-4a59-b3f8-d2f82d1fcd75 lvol 20 00:31:10.823 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3e6276d9-696f-4339-a6c2-b201b828a132 00:31:10.823 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:11.082 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3e6276d9-696f-4339-a6c2-b201b828a132 00:31:11.342 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:11.342 [2024-12-10 14:33:11.990979] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:31:11.342 14:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:11.601 14:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1847071 00:31:11.601 14:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:11.601 14:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:12.536 14:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3e6276d9-696f-4339-a6c2-b201b828a132 MY_SNAPSHOT 00:31:12.796 14:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a3b224a2-4f6c-45d6-84d1-b7ec4068691b 00:31:12.796 14:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3e6276d9-696f-4339-a6c2-b201b828a132 30 00:31:13.055 14:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a3b224a2-4f6c-45d6-84d1-b7ec4068691b MY_CLONE 00:31:13.314 14:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=07475ff8-a9dd-4d64-96ff-bacefc081ed3 00:31:13.314 14:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 07475ff8-a9dd-4d64-96ff-bacefc081ed3 00:31:13.883 14:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1847071 00:31:22.005 Initializing NVMe Controllers 00:31:22.005 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:22.005 Controller IO queue size 128, less than required. 00:31:22.005 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:22.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:22.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:22.005 Initialization complete. Launching workers. 
00:31:22.005 ========================================================
00:31:22.005 Latency(us)
00:31:22.005 Device Information : IOPS MiB/s Average min max
00:31:22.005 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12444.10 48.61 10288.42 1574.85 57507.09
00:31:22.005 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12581.00 49.14 10177.69 3407.49 56517.71
00:31:22.005 ========================================================
00:31:22.005 Total : 25025.10 97.75 10232.75 1574.85 57507.09
00:31:22.005
00:31:22.005 14:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:31:22.264 14:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3e6276d9-696f-4339-a6c2-b201b828a132
00:31:22.524 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d82cb069-eaca-4a59-b3f8-d2f82d1fcd75
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:22.783 rmmod nvme_tcp
00:31:22.783 rmmod nvme_fabrics
00:31:22.783 rmmod nvme_keyring
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1846586 ']'
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1846586
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1846586 ']'
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1846586
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1846586
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1846586'
00:31:22.783 killing process with pid 1846586
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1846586
00:31:22.783 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1846586
00:31:23.043 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:23.043 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:23.043 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:23.043 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:31:23.043 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:23.043 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save
00:31:23.043 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore
00:31:23.043 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:23.043 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:23.043 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:23.043 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:23.043 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:24.950 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:24.950
00:31:24.950 real 0m23.269s
00:31:24.950 user 0m55.948s
00:31:24.950 sys 0m10.665s
00:31:24.950 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:24.950 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:31:24.950 ************************************
00:31:24.950 END TEST nvmf_lvol
00:31:24.950 ************************************
00:31:24.950 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode
00:31:24.950 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:31:24.950 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:24.950 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:25.210 ************************************
00:31:25.210 START TEST nvmf_lvs_grow
************************************ 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:25.210 * Looking for test storage... 00:31:25.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:25.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.210 --rc genhtml_branch_coverage=1 00:31:25.210 --rc genhtml_function_coverage=1 00:31:25.210 --rc genhtml_legend=1 00:31:25.210 --rc geninfo_all_blocks=1 00:31:25.210 --rc geninfo_unexecuted_blocks=1 00:31:25.210 00:31:25.210 ' 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:25.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.210 --rc genhtml_branch_coverage=1 00:31:25.210 --rc genhtml_function_coverage=1 00:31:25.210 --rc genhtml_legend=1 00:31:25.210 --rc geninfo_all_blocks=1 00:31:25.210 --rc geninfo_unexecuted_blocks=1 00:31:25.210 00:31:25.210 ' 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:25.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.210 --rc genhtml_branch_coverage=1 00:31:25.210 --rc genhtml_function_coverage=1 00:31:25.210 --rc genhtml_legend=1 00:31:25.210 --rc geninfo_all_blocks=1 00:31:25.210 --rc geninfo_unexecuted_blocks=1 00:31:25.210 00:31:25.210 ' 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:25.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.210 --rc genhtml_branch_coverage=1 00:31:25.210 --rc genhtml_function_coverage=1 00:31:25.210 --rc genhtml_legend=1 00:31:25.210 --rc geninfo_all_blocks=1 00:31:25.210 --rc geninfo_unexecuted_blocks=1 00:31:25.210 00:31:25.210 ' 00:31:25.210 14:33:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:25.210 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
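Buried in the common.sh sourcing above is how the initiator identity gets built: it is generated once per run and reused by every later nvme connect. A minimal sketch, assuming the UUID is carved out of the NQN with parameter expansion (the trace only shows the resulting values, not the extraction itself):

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:801347e8-...
  NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # assumed: strip the NQN prefix, keep the bare UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # later connects then take the form:
  #   nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n <subsystem NQN>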
00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:25.211 14:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:31.784 14:33:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
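The array juggling just traced is the harness classifying NICs by PCI device ID: each supported family (e810, x722, mlx) gets its own list, and because this job runs with SPDK_TEST_NVMF_NICS=e810, pci_devs collapses to the E810 entries, the two ports found in the loop output below. A self-contained sketch of the same idea done straight against sysfs (a hypothetical reimplementation, not the harness code):

  #!/usr/bin/env bash
  # classify Intel NICs by PCI device ID, as the trace does via pci_bus_cache
  declare -a e810=() x722=()
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor"); device=$(<"$dev/device")
      [[ $vendor == 0x8086 ]] || continue         # Intel only; Mellanox IDs omitted here
      case $device in
          0x1592|0x159b) e810+=("${dev##*/}") ;;  # E810 family, the IDs matched above
          0x37d2)        x722+=("${dev##*/}") ;;  # X722
      esac
  done
  # the harness then resolves each address through /sys/bus/pci/devices/<addr>/net/*
  # to kernel interfaces (cvl_0_0 and cvl_0_1 on this host) and keeps the ones that are up
  printf 'e810 port: %s\n' "${e810[@]}"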
00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:31.784 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:31.784 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:31.784 Found net devices under 0000:af:00.0: cvl_0_0 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:31.784 Found net devices under 0000:af:00.1: cvl_0_1 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:31.784 14:33:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:31.784 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:32.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:32.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms
00:31:32.044
00:31:32.044 --- 10.0.0.2 ping statistics ---
00:31:32.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:32.044 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:32.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:32.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms
00:31:32.044
00:31:32.044 --- 10.0.0.1 ping statistics ---
00:31:32.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:32.044 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1852670
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1852670
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1852670 ']'
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:32.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:32.044 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:31:32.044 [2024-12-10 14:33:32.771412] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
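With both pings answered, the test topology is in place: the first E810 port lives in a private network namespace and plays the target, the second stays in the root namespace as the initiator, and traffic crosses the physical link between them (presumably the two ports are cabled back-to-back on this rig). Reassembled from the commands in the trace:

  ip netns add cvl_0_0_ns_spdk                       # namespace that owns the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

Every target launch is then prefixed with ip netns exec cvl_0_0_ns_spdk, visible in the nvmf_tgt command line above, so only the target process sees the namespaced port.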
00:31:32.044 [2024-12-10 14:33:32.772316] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:31:32.044 [2024-12-10 14:33:32.772353] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:32.304 [2024-12-10 14:33:32.859659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.304 [2024-12-10 14:33:32.896839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:32.304 [2024-12-10 14:33:32.896875] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:32.304 [2024-12-10 14:33:32.896882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:32.304 [2024-12-10 14:33:32.896888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:32.304 [2024-12-10 14:33:32.896893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:32.304 [2024-12-10 14:33:32.897414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.304 [2024-12-10 14:33:32.964835] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:32.304 [2024-12-10 14:33:32.965038] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:32.304 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:32.304 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:31:32.304 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:32.304 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:32.304 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:32.304 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:32.304 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:32.563 [2024-12-10 14:33:33.202085] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:32.563 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:32.563 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:32.563 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:32.563 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:32.563 ************************************ 00:31:32.563 START TEST lvs_grow_clean 00:31:32.563 ************************************ 00:31:32.563 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:31:32.564 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:32.564 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:32.564 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:32.564 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:32.564 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:32.564 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:32.564 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:32.564 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:32.564 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:32.822 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:32.822 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:33.081 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=54c78042-af44-4d1f-86f2-f2b1bc8d1338 00:31:33.081 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54c78042-af44-4d1f-86f2-f2b1bc8d1338 00:31:33.081 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:33.339 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:33.339 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:33.339 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 54c78042-af44-4d1f-86f2-f2b1bc8d1338 lvol 150 00:31:33.339 14:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=bdcda533-c085-4729-8265-c95a99878807 00:31:33.339 14:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:33.340 14:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:33.598 [2024-12-10 14:33:34.241809] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:33.598 [2024-12-10 14:33:34.241937] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:33.598 true 00:31:33.598 14:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:33.598 14:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54c78042-af44-4d1f-86f2-f2b1bc8d1338 00:31:33.857 14:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:33.857 14:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:34.116 14:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdcda533-c085-4729-8265-c95a99878807 00:31:34.116 14:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:34.375 [2024-12-10 14:33:35.014307] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:34.375 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:34.633 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1853152 00:31:34.633 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:34.633 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:34.633 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1853152 /var/tmp/bdevperf.sock 00:31:34.633 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1853152 ']' 00:31:34.633 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:34.633 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:34.633 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:34.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:34.633 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:34.633 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:34.633 [2024-12-10 14:33:35.276662] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:31:34.633 [2024-12-10 14:33:35.276713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1853152 ] 00:31:34.633 [2024-12-10 14:33:35.354910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.891 [2024-12-10 14:33:35.396102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:34.891 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:34.891 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:31:34.892 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:35.150 Nvme0n1 00:31:35.150 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:35.408 [ 00:31:35.408 { 00:31:35.408 "name": "Nvme0n1", 00:31:35.408 "aliases": [ 00:31:35.408 "bdcda533-c085-4729-8265-c95a99878807" 00:31:35.408 ], 00:31:35.408 "product_name": "NVMe disk", 00:31:35.408 "block_size": 4096, 00:31:35.408 "num_blocks": 38912, 00:31:35.408 "uuid": "bdcda533-c085-4729-8265-c95a99878807", 00:31:35.408 "numa_id": 1, 00:31:35.408 "assigned_rate_limits": { 00:31:35.408 "rw_ios_per_sec": 0, 00:31:35.409 "rw_mbytes_per_sec": 0, 00:31:35.409 "r_mbytes_per_sec": 0, 00:31:35.409 "w_mbytes_per_sec": 0 00:31:35.409 }, 00:31:35.409 "claimed": false, 00:31:35.409 "zoned": false, 00:31:35.409 "supported_io_types": { 00:31:35.409 "read": true, 00:31:35.409 "write": true, 00:31:35.409 "unmap": true, 00:31:35.409 "flush": true, 00:31:35.409 "reset": true, 00:31:35.409 "nvme_admin": true, 00:31:35.409 "nvme_io": true, 00:31:35.409 "nvme_io_md": false, 00:31:35.409 "write_zeroes": true, 00:31:35.409 "zcopy": false, 00:31:35.409 "get_zone_info": false, 00:31:35.409 "zone_management": false, 00:31:35.409 "zone_append": false, 00:31:35.409 "compare": true, 00:31:35.409 "compare_and_write": true, 00:31:35.409 "abort": true, 00:31:35.409 "seek_hole": false, 00:31:35.409 "seek_data": false, 00:31:35.409 "copy": true, 
00:31:35.409 "nvme_iov_md": false 00:31:35.409 }, 00:31:35.409 "memory_domains": [ 00:31:35.409 { 00:31:35.409 "dma_device_id": "system", 00:31:35.409 "dma_device_type": 1 00:31:35.409 } 00:31:35.409 ], 00:31:35.409 "driver_specific": { 00:31:35.409 "nvme": [ 00:31:35.409 { 00:31:35.409 "trid": { 00:31:35.409 "trtype": "TCP", 00:31:35.409 "adrfam": "IPv4", 00:31:35.409 "traddr": "10.0.0.2", 00:31:35.409 "trsvcid": "4420", 00:31:35.409 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:35.409 }, 00:31:35.409 "ctrlr_data": { 00:31:35.409 "cntlid": 1, 00:31:35.409 "vendor_id": "0x8086", 00:31:35.409 "model_number": "SPDK bdev Controller", 00:31:35.409 "serial_number": "SPDK0", 00:31:35.409 "firmware_revision": "25.01", 00:31:35.409 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:35.409 "oacs": { 00:31:35.409 "security": 0, 00:31:35.409 "format": 0, 00:31:35.409 "firmware": 0, 00:31:35.409 "ns_manage": 0 00:31:35.409 }, 00:31:35.409 "multi_ctrlr": true, 00:31:35.409 "ana_reporting": false 00:31:35.409 }, 00:31:35.409 "vs": { 00:31:35.409 "nvme_version": "1.3" 00:31:35.409 }, 00:31:35.409 "ns_data": { 00:31:35.409 "id": 1, 00:31:35.409 "can_share": true 00:31:35.409 } 00:31:35.409 } 00:31:35.409 ], 00:31:35.409 "mp_policy": "active_passive" 00:31:35.409 } 00:31:35.409 } 00:31:35.409 ] 00:31:35.409 14:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1853380 00:31:35.409 14:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:35.409 14:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:35.409 Running I/O for 10 seconds... 
00:31:36.785 Latency(us)
00:31:36.785 [2024-12-10T13:33:37.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:36.785 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:36.785 Nvme0n1 : 1.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00
00:31:36.785 [2024-12-10T13:33:37.525Z] ===================================================================================================================
00:31:36.785 [2024-12-10T13:33:37.525Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00
00:31:36.785 
00:31:37.352 14:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 54c78042-af44-4d1f-86f2-f2b1bc8d1338
00:31:37.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:37.610 Nvme0n1 : 2.00 23431.50 91.53 0.00 0.00 0.00 0.00 0.00
00:31:37.610 [2024-12-10T13:33:38.350Z] ===================================================================================================================
00:31:37.610 [2024-12-10T13:33:38.350Z] Total : 23431.50 91.53 0.00 0.00 0.00 0.00 0.00
00:31:37.610 
00:31:37.610 true
00:31:37.610 14:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54c78042-af44-4d1f-86f2-f2b1bc8d1338
00:31:37.610 14:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:31:37.868 14:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:31:37.868 14:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:31:37.868 14:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1853380
00:31:38.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:38.435 Nvme0n1 : 3.00 23537.33 91.94 0.00 0.00 0.00 0.00 0.00
00:31:38.435 [2024-12-10T13:33:39.175Z] ===================================================================================================================
00:31:38.435 [2024-12-10T13:33:39.175Z] Total : 23537.33 91.94 0.00 0.00 0.00 0.00 0.00
00:31:38.435 
00:31:39.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:39.811 Nvme0n1 : 4.00 23590.25 92.15 0.00 0.00 0.00 0.00 0.00
00:31:39.811 [2024-12-10T13:33:40.551Z] ===================================================================================================================
00:31:39.811 [2024-12-10T13:33:40.551Z] Total : 23590.25 92.15 0.00 0.00 0.00 0.00 0.00
00:31:39.811 
00:31:40.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:40.747 Nvme0n1 : 5.00 23672.80 92.47 0.00 0.00 0.00 0.00 0.00
00:31:40.747 [2024-12-10T13:33:41.487Z] ===================================================================================================================
00:31:40.747 [2024-12-10T13:33:41.487Z] Total : 23672.80 92.47 0.00 0.00 0.00 0.00 0.00
00:31:40.747 
00:31:41.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:41.682 Nvme0n1 : 6.00 23685.50 92.52 0.00 0.00 0.00 0.00 0.00
00:31:41.682 [2024-12-10T13:33:42.422Z] ===================================================================================================================
00:31:41.682 [2024-12-10T13:33:42.422Z] Total : 23685.50 92.52 0.00 0.00 0.00 0.00 0.00
00:31:41.682 
00:31:42.618 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:42.618 Nvme0n1 : 7.00 23730.86 92.70 0.00 0.00 0.00 0.00 0.00
00:31:42.618 [2024-12-10T13:33:43.358Z] ===================================================================================================================
00:31:42.618 [2024-12-10T13:33:43.358Z] Total : 23730.86 92.70 0.00 0.00 0.00 0.00 0.00
00:31:42.618 
00:31:43.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:43.553 Nvme0n1 : 8.00 23780.75 92.89 0.00 0.00 0.00 0.00 0.00
00:31:43.553 [2024-12-10T13:33:44.293Z] ===================================================================================================================
00:31:43.553 [2024-12-10T13:33:44.293Z] Total : 23780.75 92.89 0.00 0.00 0.00 0.00 0.00
00:31:43.553 
00:31:44.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:44.489 Nvme0n1 : 9.00 23805.44 92.99 0.00 0.00 0.00 0.00 0.00
00:31:44.489 [2024-12-10T13:33:45.229Z] ===================================================================================================================
00:31:44.489 [2024-12-10T13:33:45.229Z] Total : 23805.44 92.99 0.00 0.00 0.00 0.00 0.00
00:31:44.489 
00:31:45.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:45.421 Nvme0n1 : 10.00 23825.20 93.07 0.00 0.00 0.00 0.00 0.00
00:31:45.421 [2024-12-10T13:33:46.161Z] ===================================================================================================================
00:31:45.421 [2024-12-10T13:33:46.161Z] Total : 23825.20 93.07 0.00 0.00 0.00 0.00 0.00
00:31:45.421 
00:31:45.421 
00:31:45.421 Latency(us)
00:31:45.421 [2024-12-10T13:33:46.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:45.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:45.421 Nvme0n1 : 10.00 23827.76 93.08 0.00 0.00 5368.79 5055.63 28211.69
00:31:45.421 [2024-12-10T13:33:46.161Z] ===================================================================================================================
00:31:45.421 [2024-12-10T13:33:46.161Z] Total : 23827.76 93.08 0.00 0.00 5368.79 5055.63 28211.69
00:31:45.421 {
00:31:45.421 "results": [
00:31:45.421 {
00:31:45.421 "job": "Nvme0n1",
00:31:45.421 "core_mask": "0x2",
00:31:45.421 "workload": "randwrite",
00:31:45.421 "status": "finished",
00:31:45.421 "queue_depth": 128,
00:31:45.421 "io_size": 4096,
00:31:45.421 "runtime": 10.004299,
00:31:45.421 "iops": 23827.75644750322,
00:31:45.421 "mibps": 93.07717362305945,
00:31:45.421 "io_failed": 0,
00:31:45.421 "io_timeout": 0,
00:31:45.421 "avg_latency_us": 5368.785422746395,
00:31:45.421 "min_latency_us": 5055.634285714285,
00:31:45.421 "max_latency_us": 28211.687619047618
00:31:45.421 }
00:31:45.421 ],
00:31:45.421 "core_count": 1
00:31:45.421 }
00:31:45.680 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1853152
00:31:45.680 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1853152 ']'
00:31:45.680 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1853152
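The grow-while-writing step interleaved with the table above is a single RPC plus a cluster-count assertion; a condensed sketch, reusing the lvstore UUID printed in this run (SPDK is shorthand for the workspace path):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  LVS=54c78042-af44-4d1f-86f2-f2b1bc8d1338   # lvstore UUID from this run
  # Grow the lvol store onto its enlarged base bdev while bdevperf keeps writing...
  $SPDK/scripts/rpc.py bdev_lvol_grow_lvstore -u $LVS
  # ...then confirm the new capacity; the test above asserts 99 total data clusters.
  $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].total_data_clusters'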
00:31:45.680 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:31:45.680 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:45.680 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1853152 00:31:45.680 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:45.680 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:45.680 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1853152' 00:31:45.680 killing process with pid 1853152 00:31:45.680 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1853152 00:31:45.680 Received shutdown signal, test time was about 10.000000 seconds 00:31:45.680 00:31:45.680 Latency(us) 00:31:45.680 [2024-12-10T13:33:46.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:45.680 [2024-12-10T13:33:46.420Z] =================================================================================================================== 00:31:45.680 [2024-12-10T13:33:46.420Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:45.680 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1853152 00:31:45.680 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:45.938 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:46.195 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54c78042-af44-4d1f-86f2-f2b1bc8d1338 00:31:46.195 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:46.454 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:46.454 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:31:46.454 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:46.454 [2024-12-10 14:33:47.161908] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:46.713 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54c78042-af44-4d1f-86f2-f2b1bc8d1338 
00:31:46.713 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0
00:31:46.713 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54c78042-af44-4d1f-86f2-f2b1bc8d1338
00:31:46.713 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:31:46.713 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:31:46.713 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:31:46.713 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:31:46.713 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:31:46.713 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:31:46.713 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:31:46.713 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:31:46.713 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54c78042-af44-4d1f-86f2-f2b1bc8d1338
00:31:46.713 request:
00:31:46.713 {
00:31:46.713 "uuid": "54c78042-af44-4d1f-86f2-f2b1bc8d1338",
00:31:46.713 "method": "bdev_lvol_get_lvstores",
00:31:46.713 "req_id": 1
00:31:46.713 }
00:31:46.713 Got JSON-RPC error response
00:31:46.713 response:
00:31:46.713 {
00:31:46.713 "code": -19,
00:31:46.713 "message": "No such device"
00:31:46.713 }
00:31:46.713 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1
00:31:46.713 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:31:46.713 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:31:46.713 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:31:46.713 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:31:46.971 aio_bdev
00:31:46.971 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev
bdcda533-c085-4729-8265-c95a99878807 00:31:46.971 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=bdcda533-c085-4729-8265-c95a99878807 00:31:46.971 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:46.971 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:31:46.971 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:46.971 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:46.971 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:47.229 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bdcda533-c085-4729-8265-c95a99878807 -t 2000 00:31:47.229 [ 00:31:47.229 { 00:31:47.229 "name": "bdcda533-c085-4729-8265-c95a99878807", 00:31:47.229 "aliases": [ 00:31:47.229 "lvs/lvol" 00:31:47.229 ], 00:31:47.229 "product_name": "Logical Volume", 00:31:47.229 "block_size": 4096, 00:31:47.229 "num_blocks": 38912, 00:31:47.229 "uuid": "bdcda533-c085-4729-8265-c95a99878807", 00:31:47.229 "assigned_rate_limits": { 00:31:47.229 "rw_ios_per_sec": 0, 00:31:47.229 "rw_mbytes_per_sec": 0, 00:31:47.229 "r_mbytes_per_sec": 0, 00:31:47.229 "w_mbytes_per_sec": 0 00:31:47.229 }, 00:31:47.229 "claimed": false, 00:31:47.229 "zoned": false, 00:31:47.229 "supported_io_types": { 00:31:47.229 "read": true, 00:31:47.229 "write": true, 00:31:47.229 "unmap": true, 00:31:47.229 "flush": false, 00:31:47.229 "reset": true, 00:31:47.229 "nvme_admin": false, 00:31:47.229 "nvme_io": false, 00:31:47.229 "nvme_io_md": false, 00:31:47.229 "write_zeroes": true, 00:31:47.229 "zcopy": false, 00:31:47.229 "get_zone_info": false, 00:31:47.229 "zone_management": false, 00:31:47.229 "zone_append": false, 00:31:47.229 "compare": false, 00:31:47.229 "compare_and_write": false, 00:31:47.229 "abort": false, 00:31:47.229 "seek_hole": true, 00:31:47.229 "seek_data": true, 00:31:47.229 "copy": false, 00:31:47.229 "nvme_iov_md": false 00:31:47.229 }, 00:31:47.229 "driver_specific": { 00:31:47.229 "lvol": { 00:31:47.229 "lvol_store_uuid": "54c78042-af44-4d1f-86f2-f2b1bc8d1338", 00:31:47.229 "base_bdev": "aio_bdev", 00:31:47.229 "thin_provision": false, 00:31:47.229 "num_allocated_clusters": 38, 00:31:47.229 "snapshot": false, 00:31:47.229 "clone": false, 00:31:47.229 "esnap_clone": false 00:31:47.229 } 00:31:47.229 } 00:31:47.229 } 00:31:47.229 ] 00:31:47.229 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:31:47.229 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54c78042-af44-4d1f-86f2-f2b1bc8d1338 00:31:47.229 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:47.488 14:33:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:47.488 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54c78042-af44-4d1f-86f2-f2b1bc8d1338 00:31:47.488 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:47.747 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:47.747 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bdcda533-c085-4729-8265-c95a99878807 00:31:48.006 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 54c78042-af44-4d1f-86f2-f2b1bc8d1338 00:31:48.265 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:48.265 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:48.265 00:31:48.265 real 0m15.688s 00:31:48.265 user 0m15.216s 00:31:48.265 sys 0m1.465s 00:31:48.265 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:48.265 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:48.265 ************************************ 00:31:48.265 END TEST lvs_grow_clean 00:31:48.265 ************************************ 00:31:48.265 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:48.265 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:48.265 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:48.265 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:48.524 ************************************ 00:31:48.524 START TEST lvs_grow_dirty 00:31:48.524 ************************************ 00:31:48.524 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:31:48.524 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:48.524 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:48.524 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:48.524 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:48.524 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:48.524 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:48.524 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:48.524 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:48.524 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:48.524 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:48.524 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:48.783 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=717afe87-5800-4d6e-9358-8b6925156cef 00:31:48.783 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 717afe87-5800-4d6e-9358-8b6925156cef 00:31:48.783 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:49.042 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:49.042 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:49.042 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 717afe87-5800-4d6e-9358-8b6925156cef lvol 150 00:31:49.301 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7fa3b09c-7e0a-44c4-a982-9baa0e258f8f 00:31:49.301 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:49.301 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:49.301 [2024-12-10 14:33:50.029815] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:49.301 [2024-12-10 14:33:50.029949] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:49.301 true 00:31:49.560 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 717afe87-5800-4d6e-9358-8b6925156cef 00:31:49.560 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:49.560 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:49.560 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:49.819 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7fa3b09c-7e0a-44c4-a982-9baa0e258f8f 00:31:50.103 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:50.103 [2024-12-10 14:33:50.778194] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:50.103 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:50.428 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1855699 00:31:50.428 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:50.428 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:50.428 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1855699 /var/tmp/bdevperf.sock 00:31:50.428 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1855699 ']' 00:31:50.428 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:50.428 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:50.428 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:50.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
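Everything the dirty variant has provisioned up to this point can be reproduced with the handful of RPCs traced above; a condensed sketch, with the sizes, names and UUIDs taken from this run (SPDK/RPC are shorthand variables introduced here):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC=$SPDK/scripts/rpc.py
  # 200 MiB backing file -> AIO bdev -> lvol store -> 150 MiB lvol.
  truncate -s 200M $SPDK/test/nvmf/target/aio_bdev
  $RPC bdev_aio_create $SPDK/test/nvmf/target/aio_bdev aio_bdev 4096
  $RPC bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  $RPC bdev_lvol_create -u 717afe87-5800-4d6e-9358-8b6925156cef lvol 150
  # Double the file and let the AIO bdev pick up the new size (51200 -> 102400 blocks above).
  truncate -s 400M $SPDK/test/nvmf/target/aio_bdev
  $RPC bdev_aio_rescan aio_bdev
  # Export the lvol over NVMe/TCP so bdevperf can attach to it.
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7fa3b09c-7e0a-44c4-a982-9baa0e258f8f
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420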
00:31:50.428 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:50.428 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:50.428 [2024-12-10 14:33:51.019391] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:31:50.428 [2024-12-10 14:33:51.019441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1855699 ] 00:31:50.428 [2024-12-10 14:33:51.099702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.718 [2024-12-10 14:33:51.141989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:50.718 14:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:50.718 14:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:50.718 14:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:50.977 Nvme0n1 00:31:50.977 14:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:50.977 [ 00:31:50.977 { 00:31:50.977 "name": "Nvme0n1", 00:31:50.977 "aliases": [ 00:31:50.977 "7fa3b09c-7e0a-44c4-a982-9baa0e258f8f" 00:31:50.977 ], 00:31:50.977 "product_name": "NVMe disk", 00:31:50.977 "block_size": 4096, 00:31:50.977 "num_blocks": 38912, 00:31:50.977 "uuid": "7fa3b09c-7e0a-44c4-a982-9baa0e258f8f", 00:31:50.977 "numa_id": 1, 00:31:50.977 "assigned_rate_limits": { 00:31:50.977 "rw_ios_per_sec": 0, 00:31:50.977 "rw_mbytes_per_sec": 0, 00:31:50.977 "r_mbytes_per_sec": 0, 00:31:50.977 "w_mbytes_per_sec": 0 00:31:50.977 }, 00:31:50.977 "claimed": false, 00:31:50.977 "zoned": false, 00:31:50.977 "supported_io_types": { 00:31:50.977 "read": true, 00:31:50.977 "write": true, 00:31:50.977 "unmap": true, 00:31:50.977 "flush": true, 00:31:50.977 "reset": true, 00:31:50.977 "nvme_admin": true, 00:31:50.977 "nvme_io": true, 00:31:50.977 "nvme_io_md": false, 00:31:50.977 "write_zeroes": true, 00:31:50.977 "zcopy": false, 00:31:50.977 "get_zone_info": false, 00:31:50.977 "zone_management": false, 00:31:50.977 "zone_append": false, 00:31:50.977 "compare": true, 00:31:50.977 "compare_and_write": true, 00:31:50.977 "abort": true, 00:31:50.977 "seek_hole": false, 00:31:50.977 "seek_data": false, 00:31:50.977 "copy": true, 00:31:50.977 "nvme_iov_md": false 00:31:50.977 }, 00:31:50.977 "memory_domains": [ 00:31:50.977 { 00:31:50.977 "dma_device_id": "system", 00:31:50.977 "dma_device_type": 1 00:31:50.977 } 00:31:50.977 ], 00:31:50.977 "driver_specific": { 00:31:50.977 "nvme": [ 00:31:50.977 { 00:31:50.977 "trid": { 00:31:50.977 "trtype": "TCP", 00:31:50.977 "adrfam": "IPv4", 00:31:50.977 "traddr": "10.0.0.2", 00:31:50.977 "trsvcid": "4420", 00:31:50.977 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:50.977 }, 00:31:50.977 "ctrlr_data": 
{ 00:31:50.977 "cntlid": 1, 00:31:50.977 "vendor_id": "0x8086", 00:31:50.977 "model_number": "SPDK bdev Controller", 00:31:50.977 "serial_number": "SPDK0", 00:31:50.977 "firmware_revision": "25.01", 00:31:50.977 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:50.977 "oacs": { 00:31:50.977 "security": 0, 00:31:50.977 "format": 0, 00:31:50.977 "firmware": 0, 00:31:50.977 "ns_manage": 0 00:31:50.977 }, 00:31:50.977 "multi_ctrlr": true, 00:31:50.977 "ana_reporting": false 00:31:50.977 }, 00:31:50.977 "vs": { 00:31:50.977 "nvme_version": "1.3" 00:31:50.977 }, 00:31:50.977 "ns_data": { 00:31:50.977 "id": 1, 00:31:50.977 "can_share": true 00:31:50.977 } 00:31:50.977 } 00:31:50.977 ], 00:31:50.977 "mp_policy": "active_passive" 00:31:50.977 } 00:31:50.977 } 00:31:50.977 ] 00:31:50.977 14:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1855929 00:31:50.977 14:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:50.977 14:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:51.236 Running I/O for 10 seconds... 00:31:52.172 Latency(us) 00:31:52.172 [2024-12-10T13:33:52.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:52.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:52.172 Nvme0n1 : 1.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:31:52.172 [2024-12-10T13:33:52.912Z] =================================================================================================================== 00:31:52.172 [2024-12-10T13:33:52.912Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:31:52.172 00:31:53.109 14:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 717afe87-5800-4d6e-9358-8b6925156cef 00:31:53.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:53.109 Nvme0n1 : 2.00 23431.50 91.53 0.00 0.00 0.00 0.00 0.00 00:31:53.109 [2024-12-10T13:33:53.849Z] =================================================================================================================== 00:31:53.109 [2024-12-10T13:33:53.849Z] Total : 23431.50 91.53 0.00 0.00 0.00 0.00 0.00 00:31:53.109 00:31:53.368 true 00:31:53.368 14:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 717afe87-5800-4d6e-9358-8b6925156cef 00:31:53.368 14:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:53.627 14:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:53.627 14:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:53.627 14:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1855929 00:31:54.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:54.194 Nvme0n1 : 
3.00 23431.67 91.53 0.00 0.00 0.00 0.00 0.00
00:31:54.194 [2024-12-10T13:33:54.934Z] ===================================================================================================================
00:31:54.194 [2024-12-10T13:33:54.934Z] Total : 23431.67 91.53 0.00 0.00 0.00 0.00 0.00
00:31:54.194 
00:31:55.130 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:55.130 Nvme0n1 : 4.00 23570.75 92.07 0.00 0.00 0.00 0.00 0.00
00:31:55.130 [2024-12-10T13:33:55.870Z] ===================================================================================================================
00:31:55.130 [2024-12-10T13:33:55.870Z] Total : 23570.75 92.07 0.00 0.00 0.00 0.00 0.00
00:31:55.130 
00:31:56.507 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:56.507 Nvme0n1 : 5.00 23657.20 92.41 0.00 0.00 0.00 0.00 0.00
00:31:56.507 [2024-12-10T13:33:57.247Z] ===================================================================================================================
00:31:56.507 [2024-12-10T13:33:57.247Z] Total : 23657.20 92.41 0.00 0.00 0.00 0.00 0.00
00:31:56.507 
00:31:57.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:57.074 Nvme0n1 : 6.00 23714.83 92.64 0.00 0.00 0.00 0.00 0.00
00:31:57.074 [2024-12-10T13:33:57.814Z] ===================================================================================================================
00:31:57.074 [2024-12-10T13:33:57.814Z] Total : 23714.83 92.64 0.00 0.00 0.00 0.00 0.00
00:31:57.074 
00:31:58.452 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:58.452 Nvme0n1 : 7.00 23756.00 92.80 0.00 0.00 0.00 0.00 0.00
00:31:58.452 [2024-12-10T13:33:59.192Z] ===================================================================================================================
00:31:58.452 [2024-12-10T13:33:59.192Z] Total : 23756.00 92.80 0.00 0.00 0.00 0.00 0.00
00:31:58.452 
00:31:59.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:59.388 Nvme0n1 : 8.00 23789.00 92.93 0.00 0.00 0.00 0.00 0.00
00:31:59.388 [2024-12-10T13:34:00.128Z] ===================================================================================================================
00:31:59.388 [2024-12-10T13:34:00.128Z] Total : 23789.00 92.93 0.00 0.00 0.00 0.00 0.00
00:31:59.388 
00:32:00.324 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:00.324 Nvme0n1 : 9.00 23756.33 92.80 0.00 0.00 0.00 0.00 0.00
00:32:00.324 [2024-12-10T13:34:01.064Z] ===================================================================================================================
00:32:00.324 [2024-12-10T13:34:01.064Z] Total : 23756.33 92.80 0.00 0.00 0.00 0.00 0.00
00:32:00.324 
00:32:01.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:01.261 Nvme0n1 : 10.00 23781.00 92.89 0.00 0.00 0.00 0.00 0.00
00:32:01.261 [2024-12-10T13:34:02.001Z] ===================================================================================================================
00:32:01.261 [2024-12-10T13:34:02.001Z] Total : 23781.00 92.89 0.00 0.00 0.00 0.00 0.00
00:32:01.261 
00:32:01.261 
00:32:01.261 Latency(us)
00:32:01.261 [2024-12-10T13:34:02.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:01.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:01.261 Nvme0n1 : 10.00 23786.46 92.92 0.00 0.00 5378.26 3120.76 26588.89
00:32:01.261 [2024-12-10T13:34:02.001Z] ===================================================================================================================
00:32:01.261 [2024-12-10T13:34:02.001Z] Total : 23786.46 92.92 0.00 0.00 5378.26 3120.76 26588.89
00:32:01.261 {
00:32:01.261 "results": [
00:32:01.261 {
00:32:01.261 "job": "Nvme0n1",
00:32:01.261 "core_mask": "0x2",
00:32:01.261 "workload": "randwrite",
00:32:01.261 "status": "finished",
00:32:01.261 "queue_depth": 128,
00:32:01.261 "io_size": 4096,
00:32:01.261 "runtime": 10.003086,
00:32:01.261 "iops": 23786.45949859873,
00:32:01.261 "mibps": 92.9158574164013,
00:32:01.261 "io_failed": 0,
00:32:01.261 "io_timeout": 0,
00:32:01.261 "avg_latency_us": 5378.257202912803,
00:32:01.261 "min_latency_us": 3120.7619047619046,
00:32:01.261 "max_latency_us": 26588.891428571427
00:32:01.261 }
00:32:01.261 ],
00:32:01.261 "core_count": 1
00:32:01.261 }
00:32:01.261 14:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1855699
00:32:01.261 14:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1855699 ']'
00:32:01.261 14:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1855699
00:32:01.261 14:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname
00:32:01.261 14:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:01.261 14:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1855699
00:32:01.261 14:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:01.261 14:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:32:01.261 14:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1855699'
00:32:01.261 killing process with pid 1855699
00:32:01.261 14:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1855699
00:32:01.261 Received shutdown signal, test time was about 10.000000 seconds
00:32:01.261 
00:32:01.261 Latency(us)
00:32:01.261 [2024-12-10T13:34:02.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:01.261 [2024-12-10T13:34:02.001Z] ===================================================================================================================
00:32:01.261 [2024-12-10T13:34:02.001Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:01.261 14:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1855699
00:32:01.520 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:32:01.520 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem
nqn.2016-06.io.spdk:cnode0 00:32:01.779 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 717afe87-5800-4d6e-9358-8b6925156cef 00:32:01.779 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:02.038 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:02.038 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:02.038 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1852670 00:32:02.038 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1852670 00:32:02.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1852670 Killed "${NVMF_APP[@]}" "$@" 00:32:02.038 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:02.038 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:02.038 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:02.038 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:02.038 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:02.038 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1857531 00:32:02.038 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1857531 00:32:02.038 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:02.038 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1857531 ']' 00:32:02.038 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:02.038 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:02.038 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:02.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:02.038 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:02.038 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:02.038 [2024-12-10 14:34:02.742182] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:02.038 [2024-12-10 14:34:02.743105] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:32:02.038 [2024-12-10 14:34:02.743141] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:02.297 [2024-12-10 14:34:02.827633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.298 [2024-12-10 14:34:02.866086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:02.298 [2024-12-10 14:34:02.866121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:02.298 [2024-12-10 14:34:02.866128] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:02.298 [2024-12-10 14:34:02.866135] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:02.298 [2024-12-10 14:34:02.866140] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:02.298 [2024-12-10 14:34:02.866696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.298 [2024-12-10 14:34:02.934012] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:02.298 [2024-12-10 14:34:02.934212] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
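This restart is the crux of the dirty-path test: the previous target (pid 1852670 in this run) was killed with SIGKILL while the grown lvstore was live, so its metadata is left dirty on disk, and re-creating the AIO bdev below is what makes the blobstore replay and recover it. A minimal sketch of the sequence, with the flags as traced in this log (the ip-netns wrapper is omitted and NVMF_PID stands in for the old target pid):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  kill -9 $NVMF_PID   # leave the lvstore dirty on purpose (pid 1852670 here)
  # Restart the target in interrupt mode on a single core, as above.
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  # Re-attaching the backing file triggers "Performing recovery on blobstore" below.
  $SPDK/scripts/rpc.py bdev_aio_create $SPDK/test/nvmf/target/aio_bdev aio_bdev 4096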
00:32:02.866 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:02.866 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:02.866 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:02.866 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:02.866 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:03.125 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:03.125 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:03.125 [2024-12-10 14:34:03.784074] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:03.125 [2024-12-10 14:34:03.784298] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:03.125 [2024-12-10 14:34:03.784386] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:03.125 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:03.125 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7fa3b09c-7e0a-44c4-a982-9baa0e258f8f 00:32:03.125 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7fa3b09c-7e0a-44c4-a982-9baa0e258f8f 00:32:03.125 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:03.125 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:03.125 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:03.125 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:03.125 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:03.384 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7fa3b09c-7e0a-44c4-a982-9baa0e258f8f -t 2000 00:32:03.643 [ 00:32:03.643 { 00:32:03.643 "name": "7fa3b09c-7e0a-44c4-a982-9baa0e258f8f", 00:32:03.643 "aliases": [ 00:32:03.643 "lvs/lvol" 00:32:03.643 ], 00:32:03.643 "product_name": "Logical Volume", 00:32:03.643 "block_size": 4096, 00:32:03.643 "num_blocks": 38912, 00:32:03.643 "uuid": "7fa3b09c-7e0a-44c4-a982-9baa0e258f8f", 00:32:03.643 "assigned_rate_limits": { 00:32:03.643 "rw_ios_per_sec": 0, 00:32:03.643 "rw_mbytes_per_sec": 0, 00:32:03.643 
"r_mbytes_per_sec": 0, 00:32:03.643 "w_mbytes_per_sec": 0 00:32:03.643 }, 00:32:03.643 "claimed": false, 00:32:03.643 "zoned": false, 00:32:03.643 "supported_io_types": { 00:32:03.643 "read": true, 00:32:03.643 "write": true, 00:32:03.643 "unmap": true, 00:32:03.643 "flush": false, 00:32:03.643 "reset": true, 00:32:03.643 "nvme_admin": false, 00:32:03.643 "nvme_io": false, 00:32:03.643 "nvme_io_md": false, 00:32:03.643 "write_zeroes": true, 00:32:03.643 "zcopy": false, 00:32:03.643 "get_zone_info": false, 00:32:03.643 "zone_management": false, 00:32:03.643 "zone_append": false, 00:32:03.643 "compare": false, 00:32:03.643 "compare_and_write": false, 00:32:03.643 "abort": false, 00:32:03.643 "seek_hole": true, 00:32:03.643 "seek_data": true, 00:32:03.643 "copy": false, 00:32:03.643 "nvme_iov_md": false 00:32:03.643 }, 00:32:03.643 "driver_specific": { 00:32:03.643 "lvol": { 00:32:03.643 "lvol_store_uuid": "717afe87-5800-4d6e-9358-8b6925156cef", 00:32:03.643 "base_bdev": "aio_bdev", 00:32:03.643 "thin_provision": false, 00:32:03.643 "num_allocated_clusters": 38, 00:32:03.643 "snapshot": false, 00:32:03.643 "clone": false, 00:32:03.643 "esnap_clone": false 00:32:03.643 } 00:32:03.643 } 00:32:03.643 } 00:32:03.643 ] 00:32:03.643 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:03.643 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 717afe87-5800-4d6e-9358-8b6925156cef 00:32:03.643 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:03.902 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:03.902 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 717afe87-5800-4d6e-9358-8b6925156cef 00:32:03.902 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:03.902 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:03.902 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:04.161 [2024-12-10 14:34:04.767172] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:04.162 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 717afe87-5800-4d6e-9358-8b6925156cef 00:32:04.162 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:32:04.162 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 717afe87-5800-4d6e-9358-8b6925156cef 00:32:04.162 14:34:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:32:04.162 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:04.162 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:32:04.162 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:04.162 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:32:04.162 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:04.162 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:32:04.162 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:32:04.162 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 717afe87-5800-4d6e-9358-8b6925156cef
00:32:04.422 request:
00:32:04.422 {
00:32:04.422 "uuid": "717afe87-5800-4d6e-9358-8b6925156cef",
00:32:04.422 "method": "bdev_lvol_get_lvstores",
00:32:04.422 "req_id": 1
00:32:04.422 }
00:32:04.422 Got JSON-RPC error response
00:32:04.422 response:
00:32:04.422 {
00:32:04.422 "code": -19,
00:32:04.422 "message": "No such device"
00:32:04.422 }
00:32:04.422 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1
00:32:04.422 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:32:04.422 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:32:04.422 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:32:04.422 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:32:04.683 aio_bdev
00:32:04.683 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7fa3b09c-7e0a-44c4-a982-9baa0e258f8f
00:32:04.683 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7fa3b09c-7e0a-44c4-a982-9baa0e258f8f
00:32:04.683 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:32:04.683 14:34:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:04.683 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:04.683 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:04.683 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:04.683 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7fa3b09c-7e0a-44c4-a982-9baa0e258f8f -t 2000 00:32:04.941 [ 00:32:04.942 { 00:32:04.942 "name": "7fa3b09c-7e0a-44c4-a982-9baa0e258f8f", 00:32:04.942 "aliases": [ 00:32:04.942 "lvs/lvol" 00:32:04.942 ], 00:32:04.942 "product_name": "Logical Volume", 00:32:04.942 "block_size": 4096, 00:32:04.942 "num_blocks": 38912, 00:32:04.942 "uuid": "7fa3b09c-7e0a-44c4-a982-9baa0e258f8f", 00:32:04.942 "assigned_rate_limits": { 00:32:04.942 "rw_ios_per_sec": 0, 00:32:04.942 "rw_mbytes_per_sec": 0, 00:32:04.942 "r_mbytes_per_sec": 0, 00:32:04.942 "w_mbytes_per_sec": 0 00:32:04.942 }, 00:32:04.942 "claimed": false, 00:32:04.942 "zoned": false, 00:32:04.942 "supported_io_types": { 00:32:04.942 "read": true, 00:32:04.942 "write": true, 00:32:04.942 "unmap": true, 00:32:04.942 "flush": false, 00:32:04.942 "reset": true, 00:32:04.942 "nvme_admin": false, 00:32:04.942 "nvme_io": false, 00:32:04.942 "nvme_io_md": false, 00:32:04.942 "write_zeroes": true, 00:32:04.942 "zcopy": false, 00:32:04.942 "get_zone_info": false, 00:32:04.942 "zone_management": false, 00:32:04.942 "zone_append": false, 00:32:04.942 "compare": false, 00:32:04.942 "compare_and_write": false, 00:32:04.942 "abort": false, 00:32:04.942 "seek_hole": true, 00:32:04.942 "seek_data": true, 00:32:04.942 "copy": false, 00:32:04.942 "nvme_iov_md": false 00:32:04.942 }, 00:32:04.942 "driver_specific": { 00:32:04.942 "lvol": { 00:32:04.942 "lvol_store_uuid": "717afe87-5800-4d6e-9358-8b6925156cef", 00:32:04.942 "base_bdev": "aio_bdev", 00:32:04.942 "thin_provision": false, 00:32:04.942 "num_allocated_clusters": 38, 00:32:04.942 "snapshot": false, 00:32:04.942 "clone": false, 00:32:04.942 "esnap_clone": false 00:32:04.942 } 00:32:04.942 } 00:32:04.942 } 00:32:04.942 ] 00:32:04.942 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:04.942 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:04.942 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 717afe87-5800-4d6e-9358-8b6925156cef 00:32:05.200 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:05.200 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 717afe87-5800-4d6e-9358-8b6925156cef 00:32:05.200 14:34:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:05.459 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:05.459 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7fa3b09c-7e0a-44c4-a982-9baa0e258f8f 00:32:05.459 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 717afe87-5800-4d6e-9358-8b6925156cef 00:32:05.717 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:05.976 00:32:05.976 real 0m17.530s 00:32:05.976 user 0m34.753s 00:32:05.976 sys 0m3.585s 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:05.976 ************************************ 00:32:05.976 END TEST lvs_grow_dirty 00:32:05.976 ************************************ 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:05.976 nvmf_trace.0 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
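
Note: the lvs_grow_dirty teardown traced above is a fixed RPC sequence — delete the lvol bdev, then its lvol store, then the backing AIO bdev, then remove the backing file. A minimal sketch, assuming rpc.py resolves to scripts/rpc.py and using the per-run names/UUIDs printed in this trace (both change every run); $rootdir standing in for the spdk checkout is an assumption:

# Tear down the lvol stack in reverse order of creation.
rpc.py bdev_lvol_delete 7fa3b09c-7e0a-44c4-a982-9baa0e258f8f
rpc.py bdev_lvol_delete_lvstore -u 717afe87-5800-4d6e-9358-8b6925156cef
rpc.py bdev_aio_delete aio_bdev
rm -f "$rootdir/test/nvmf/target/aio_bdev"   # backing file created for bdev_aio_create
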
00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:05.976 rmmod nvme_tcp 00:32:05.976 rmmod nvme_fabrics 00:32:05.976 rmmod nvme_keyring 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1857531 ']' 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1857531 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1857531 ']' 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1857531 00:32:05.976 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:32:06.239 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:06.239 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1857531 00:32:06.239 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:06.239 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:06.239 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1857531' 00:32:06.239 killing process with pid 1857531 00:32:06.239 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1857531 00:32:06.239 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1857531 00:32:06.239 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:06.239 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:06.239 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:06.239 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:06.239 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:06.239 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:06.239 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:06.239 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:06.239 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:06.239 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.239 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:06.239 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:08.776 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:08.776 00:32:08.776 real 0m43.302s 00:32:08.776 user 0m52.678s 00:32:08.776 sys 0m10.548s 00:32:08.776 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:08.776 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:08.776 ************************************ 00:32:08.776 END TEST nvmf_lvs_grow 00:32:08.776 ************************************ 00:32:08.776 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:08.776 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:08.776 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:08.776 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:08.776 ************************************ 00:32:08.776 START TEST nvmf_bdev_io_wait 00:32:08.776 ************************************ 00:32:08.776 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:08.776 * Looking for test storage... 
00:32:08.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:08.776 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:08.776 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:08.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:08.777 --rc genhtml_branch_coverage=1 00:32:08.777 --rc genhtml_function_coverage=1 00:32:08.777 --rc genhtml_legend=1 00:32:08.777 --rc geninfo_all_blocks=1 00:32:08.777 --rc geninfo_unexecuted_blocks=1 00:32:08.777 00:32:08.777 ' 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:08.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:08.777 --rc genhtml_branch_coverage=1 00:32:08.777 --rc genhtml_function_coverage=1 00:32:08.777 --rc genhtml_legend=1 00:32:08.777 --rc geninfo_all_blocks=1 00:32:08.777 --rc geninfo_unexecuted_blocks=1 00:32:08.777 00:32:08.777 ' 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:08.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:08.777 --rc genhtml_branch_coverage=1 00:32:08.777 --rc genhtml_function_coverage=1 00:32:08.777 --rc genhtml_legend=1 00:32:08.777 --rc geninfo_all_blocks=1 00:32:08.777 --rc geninfo_unexecuted_blocks=1 00:32:08.777 00:32:08.777 ' 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:08.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:08.777 --rc genhtml_branch_coverage=1 00:32:08.777 --rc genhtml_function_coverage=1 00:32:08.777 --rc genhtml_legend=1 00:32:08.777 --rc geninfo_all_blocks=1 00:32:08.777 --rc 
geninfo_unexecuted_blocks=1 00:32:08.777 00:32:08.777 ' 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:08.777 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:08.778 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:08.778 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:08.778 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:08.778 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:08.778 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:08.778 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:08.778 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:08.778 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:08.778 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.778 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:08.778 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:08.778 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:08.778 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:08.778 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:08.778 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:15.352 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:15.352 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:15.352 Found net devices under 0000:af:00.0: cvl_0_0 00:32:15.352 
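
Note: the probe loop above resolves each matched PCI function to its kernel net device by globbing sysfs and stripping the path, which is what produces the "Found net devices under ..." lines for both ports. A minimal sketch of that lookup, with the PCI address hard-coded for illustration:

# Map a PCI function to its network interface name(s) via sysfs.
pci=0000:af:00.0                            # first e810 port seen in this run
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")     # drop the sysfs path, keep ifnames
echo "Found net devices under $pci: ${pci_net_devs[*]}"
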
14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:15.352 Found net devices under 0000:af:00.1: cvl_0_1 00:32:15.352 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:15.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:15.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:32:15.353 00:32:15.353 --- 10.0.0.2 ping statistics --- 00:32:15.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.353 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:15.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:15.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:32:15.353 00:32:15.353 --- 10.0.0.1 ping statistics --- 00:32:15.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.353 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:15.353 14:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:15.353 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:15.353 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:15.353 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:15.353 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:15.353 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1862189 00:32:15.353 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:15.353 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1862189 00:32:15.353 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1862189 ']' 00:32:15.353 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.353 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:15.353 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
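
Note: the plumbing traced above moves the target-side port into a private network namespace, addresses both ends, opens the NVMe/TCP port, and ping-checks both directions before the target starts. A minimal sketch of the same steps, assuming the cvl_0_0/cvl_0_1 names discovered earlier in this run:

# Target port lives in its own namespace; initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP traffic; the comment tags the rule for later cleanup.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> initiator
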
00:32:15.353 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:15.353 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:15.353 [2024-12-10 14:34:16.056459] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:15.353 [2024-12-10 14:34:16.057361] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:32:15.353 [2024-12-10 14:34:16.057395] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:15.613 [2024-12-10 14:34:16.142793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:15.613 [2024-12-10 14:34:16.183309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:15.613 [2024-12-10 14:34:16.183347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:15.613 [2024-12-10 14:34:16.183354] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:15.613 [2024-12-10 14:34:16.183360] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:15.613 [2024-12-10 14:34:16.183365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:15.613 [2024-12-10 14:34:16.184833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:15.613 [2024-12-10 14:34:16.184936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:15.613 [2024-12-10 14:34:16.185040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.613 [2024-12-10 14:34:16.185040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:15.613 [2024-12-10 14:34:16.185412] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
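
Note: the DPDK/reactor notices above come from the target process that nvmfappstart launched inside the namespace one entry earlier. A minimal sketch of that launch-and-wait, assuming $rootdir is the spdk checkout and that waitforlisten is the autotest helper that polls the RPC socket:

# 4 reactors (-m 0xF), interrupt mode, init paused until framework_start_init.
ip netns exec cvl_0_0_ns_spdk \
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
nvmfpid=$!
waitforlisten "$nvmfpid"    # returns once /var/tmp/spdk.sock accepts RPCs
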
00:32:15.613 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:15.613 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:32:15.613 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:15.613 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:15.613 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:15.613 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:15.613 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:15.613 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.613 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:15.613 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.613 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:15.613 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.613 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:15.613 [2024-12-10 14:34:16.317113] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:15.613 [2024-12-10 14:34:16.317355] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:15.613 [2024-12-10 14:34:16.317847] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:15.613 [2024-12-10 14:34:16.317900] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
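
Note: with the target parked at --wait-for-rpc, bring-up finishes entirely over RPC — the bdev_set_options/framework_start_init calls above, then (in the entries that follow) the transport, a malloc bdev, and the subsystem with its namespace and listener. A minimal sketch of the full sequence, assuming rpc.py targets the default /var/tmp/spdk.sock (a Unix socket, so no netns exec is needed):

rpc.py bdev_set_options -p 5 -c 1             # must land before init resumes
rpc.py framework_start_init                   # resume startup
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
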
00:32:15.613 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.613 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:15.613 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.613 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:15.613 [2024-12-10 14:34:16.329818] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:15.613 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.613 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:15.613 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.613 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:15.873 Malloc0 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:15.873 [2024-12-10 14:34:16.397865] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1862285 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1862287 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:15.873 { 00:32:15.873 "params": { 00:32:15.873 "name": "Nvme$subsystem", 00:32:15.873 "trtype": "$TEST_TRANSPORT", 00:32:15.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:15.873 "adrfam": "ipv4", 00:32:15.873 "trsvcid": "$NVMF_PORT", 00:32:15.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:15.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:15.873 "hdgst": ${hdgst:-false}, 00:32:15.873 "ddgst": ${ddgst:-false} 00:32:15.873 }, 00:32:15.873 "method": "bdev_nvme_attach_controller" 00:32:15.873 } 00:32:15.873 EOF 00:32:15.873 )") 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1862289 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:15.873 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:15.874 { 00:32:15.874 "params": { 00:32:15.874 "name": "Nvme$subsystem", 00:32:15.874 "trtype": "$TEST_TRANSPORT", 00:32:15.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:15.874 "adrfam": "ipv4", 00:32:15.874 "trsvcid": "$NVMF_PORT", 00:32:15.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:15.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:15.874 "hdgst": ${hdgst:-false}, 00:32:15.874 "ddgst": ${ddgst:-false} 00:32:15.874 }, 00:32:15.874 "method": "bdev_nvme_attach_controller" 00:32:15.874 } 00:32:15.874 EOF 00:32:15.874 )") 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=1862292 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:15.874 { 00:32:15.874 "params": { 00:32:15.874 "name": "Nvme$subsystem", 00:32:15.874 "trtype": "$TEST_TRANSPORT", 00:32:15.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:15.874 "adrfam": "ipv4", 00:32:15.874 "trsvcid": "$NVMF_PORT", 00:32:15.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:15.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:15.874 "hdgst": ${hdgst:-false}, 00:32:15.874 "ddgst": ${ddgst:-false} 00:32:15.874 }, 00:32:15.874 "method": "bdev_nvme_attach_controller" 00:32:15.874 } 00:32:15.874 EOF 00:32:15.874 )") 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:15.874 { 00:32:15.874 "params": { 00:32:15.874 "name": "Nvme$subsystem", 00:32:15.874 "trtype": "$TEST_TRANSPORT", 00:32:15.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:15.874 "adrfam": "ipv4", 00:32:15.874 "trsvcid": "$NVMF_PORT", 00:32:15.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:15.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:15.874 "hdgst": ${hdgst:-false}, 00:32:15.874 "ddgst": ${ddgst:-false} 00:32:15.874 }, 00:32:15.874 "method": "bdev_nvme_attach_controller" 00:32:15.874 } 00:32:15.874 EOF 00:32:15.874 )") 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1862285 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:15.874 "params": { 00:32:15.874 "name": "Nvme1", 00:32:15.874 "trtype": "tcp", 00:32:15.874 "traddr": "10.0.0.2", 00:32:15.874 "adrfam": "ipv4", 00:32:15.874 "trsvcid": "4420", 00:32:15.874 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:15.874 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:15.874 "hdgst": false, 00:32:15.874 "ddgst": false 00:32:15.874 }, 00:32:15.874 "method": "bdev_nvme_attach_controller" 00:32:15.874 }' 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:15.874 "params": { 00:32:15.874 "name": "Nvme1", 00:32:15.874 "trtype": "tcp", 00:32:15.874 "traddr": "10.0.0.2", 00:32:15.874 "adrfam": "ipv4", 00:32:15.874 "trsvcid": "4420", 00:32:15.874 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:15.874 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:15.874 "hdgst": false, 00:32:15.874 "ddgst": false 00:32:15.874 }, 00:32:15.874 "method": "bdev_nvme_attach_controller" 00:32:15.874 }' 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:15.874 "params": { 00:32:15.874 "name": "Nvme1", 00:32:15.874 "trtype": "tcp", 00:32:15.874 "traddr": "10.0.0.2", 00:32:15.874 "adrfam": "ipv4", 00:32:15.874 "trsvcid": "4420", 00:32:15.874 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:15.874 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:15.874 "hdgst": false, 00:32:15.874 "ddgst": false 00:32:15.874 }, 00:32:15.874 "method": "bdev_nvme_attach_controller" 00:32:15.874 }' 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:15.874 14:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:15.874 "params": { 00:32:15.874 "name": "Nvme1", 00:32:15.874 "trtype": "tcp", 00:32:15.874 "traddr": "10.0.0.2", 00:32:15.874 "adrfam": "ipv4", 00:32:15.874 "trsvcid": "4420", 00:32:15.874 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:15.874 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:15.874 "hdgst": false, 00:32:15.874 "ddgst": false 00:32:15.874 }, 00:32:15.874 "method": "bdev_nvme_attach_controller" 00:32:15.874 }' 00:32:15.874 [2024-12-10 14:34:16.447839] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:32:15.874 [2024-12-10 14:34:16.447888] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:15.874 [2024-12-10 14:34:16.448691] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
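The interleaved trace above is gen_nvmf_target_json at work: each bdevperf instance gets a bdev_nvme_attach_controller entry rendered from a heredoc template, the entries are comma-joined under IFS=',', and jq pretty-prints the result (the @560/@582/@584/@585/@586 markers in the trace). A hedged, condensed reconstruction of the helper from nvmf/common.sh — the real one also appends a bdev_wait_for_examine entry:

    gen_nvmf_target_json() {
      local subsystem config=()
      for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
      done
      # comma-join the entries and wrap them in the bdev-subsystem layout
      # that bdevperf's --json option expects
      jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [
$(IFS=","; printf '%s\n' "${config[*]}")
] } ] }
JSON
    }

Note that bdevperf never reads a config file from disk here: the script hands the generated JSON over through process substitution, bdevperf --json <(gen_nvmf_target_json), which the kernel exposes as the /dev/fd/63 seen on every bdevperf command line above.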
00:32:15.875 [2024-12-10 14:34:16.448738] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:15.875 [2024-12-10 14:34:16.453031] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:32:15.875 [2024-12-10 14:34:16.453074] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:15.875 [2024-12-10 14:34:16.455176] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:32:15.875 [2024-12-10 14:34:16.455225] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:16.133 [2024-12-10 14:34:16.645389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.133 [2024-12-10 14:34:16.690259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:16.133 [2024-12-10 14:34:16.743683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.133 [2024-12-10 14:34:16.785612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.133 [2024-12-10 14:34:16.798107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:16.133 [2024-12-10 14:34:16.828777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:16.133 [2024-12-10 14:34:16.858272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.392 [2024-12-10 14:34:16.898420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:16.392 Running I/O for 1 seconds... 00:32:16.392 Running I/O for 1 seconds... 00:32:16.392 Running I/O for 1 seconds... 00:32:16.392 Running I/O for 1 seconds... 
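Four bdevperf instances are now running concurrently, one per workload. Each needs a distinct core mask, shm id and DPDK file prefix (spdk1 through spdk4 in the EAL parameter lines above) so that four EAL processes can coexist on one host. A sketch of the launch pattern — illustrative only; the real bdev_io_wait.sh writes the four commands out inline and keeps WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID rather than an array, and $rootdir here stands for the spdk checkout:

    masks=(0x10 0x20 0x40 0x80)
    workloads=(write read flush unmap)
    for i in 0 1 2 3; do
      "$rootdir/build/examples/bdevperf" -m "${masks[i]}" -i $((i + 1)) \
        --json <(gen_nvmf_target_json) -q 128 -o 4096 \
        -w "${workloads[i]}" -t 1 -s 256 &
      pids[i]=$!
    done
    wait "${pids[@]}"   # the trace's 'wait 1862285' and the @38-@40 waits below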
00:32:17.327 8459.00 IOPS, 33.04 MiB/s 00:32:17.327 Latency(us) 00:32:17.327 [2024-12-10T13:34:18.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.327 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:32:17.327 Nvme1n1 : 1.02 8456.41 33.03 0.00 0.00 15029.96 3308.01 23592.96 00:32:17.327 [2024-12-10T13:34:18.067Z] =================================================================================================================== 00:32:17.327 [2024-12-10T13:34:18.067Z] Total : 8456.41 33.03 0.00 0.00 15029.96 3308.01 23592.96 00:32:17.327 12038.00 IOPS, 47.02 MiB/s 00:32:17.327 Latency(us) 00:32:17.327 [2024-12-10T13:34:18.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.327 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:32:17.327 Nvme1n1 : 1.01 12101.43 47.27 0.00 0.00 10543.43 1474.56 15166.90 00:32:17.327 [2024-12-10T13:34:18.067Z] =================================================================================================================== 00:32:17.327 [2024-12-10T13:34:18.067Z] Total : 12101.43 47.27 0.00 0.00 10543.43 1474.56 15166.90 00:32:17.327 8136.00 IOPS, 31.78 MiB/s 00:32:17.327 Latency(us) 00:32:17.327 [2024-12-10T13:34:18.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.328 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:32:17.328 Nvme1n1 : 1.01 8276.94 32.33 0.00 0.00 15435.95 2683.86 30708.30 00:32:17.328 [2024-12-10T13:34:18.068Z] =================================================================================================================== 00:32:17.328 [2024-12-10T13:34:18.068Z] Total : 8276.94 32.33 0.00 0.00 15435.95 2683.86 30708.30 00:32:17.587 243392.00 IOPS, 950.75 MiB/s 00:32:17.587 Latency(us) 00:32:17.587 [2024-12-10T13:34:18.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.587 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:32:17.587 Nvme1n1 : 1.00 243024.04 949.31 0.00 0.00 523.88 230.16 1513.57 00:32:17.587 [2024-12-10T13:34:18.327Z] =================================================================================================================== 00:32:17.587 [2024-12-10T13:34:18.327Z] Total : 243024.04 949.31 0.00 0.00 523.88 230.16 1513.57 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1862287 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1862289 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1862292 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:17.587 rmmod nvme_tcp 00:32:17.587 rmmod nvme_fabrics 00:32:17.587 rmmod nvme_keyring 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1862189 ']' 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1862189 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1862189 ']' 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1862189 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:17.587 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1862189 00:32:17.846 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:17.846 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:17.846 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1862189' 00:32:17.846 killing process with pid 1862189 00:32:17.846 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1862189 00:32:17.846 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1862189 00:32:17.846 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:17.846 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:17.846 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:17.846 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:17.846 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:32:17.846 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:32:17.846 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:17.846 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:17.846 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:17.846 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.846 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:17.846 14:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.384 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:20.384 00:32:20.384 real 0m11.475s 00:32:20.384 user 0m15.012s 00:32:20.384 sys 0m7.006s 00:32:20.384 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:20.384 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:20.384 ************************************ 00:32:20.384 END TEST nvmf_bdev_io_wait 00:32:20.384 ************************************ 00:32:20.384 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:20.384 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:20.384 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:20.384 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:20.384 ************************************ 00:32:20.384 START TEST nvmf_queue_depth 00:32:20.384 ************************************ 00:32:20.384 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:20.384 * Looking for test storage... 
00:32:20.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:20.384 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:20.384 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:32:20.384 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:20.384 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:20.384 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:20.384 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:20.384 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:20.384 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:20.384 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:20.384 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:20.384 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:20.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.385 --rc genhtml_branch_coverage=1 00:32:20.385 --rc genhtml_function_coverage=1 00:32:20.385 --rc genhtml_legend=1 00:32:20.385 --rc geninfo_all_blocks=1 00:32:20.385 --rc geninfo_unexecuted_blocks=1 00:32:20.385 00:32:20.385 ' 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:20.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.385 --rc genhtml_branch_coverage=1 00:32:20.385 --rc genhtml_function_coverage=1 00:32:20.385 --rc genhtml_legend=1 00:32:20.385 --rc geninfo_all_blocks=1 00:32:20.385 --rc geninfo_unexecuted_blocks=1 00:32:20.385 00:32:20.385 ' 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:20.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.385 --rc genhtml_branch_coverage=1 00:32:20.385 --rc genhtml_function_coverage=1 00:32:20.385 --rc genhtml_legend=1 00:32:20.385 --rc geninfo_all_blocks=1 00:32:20.385 --rc geninfo_unexecuted_blocks=1 00:32:20.385 00:32:20.385 ' 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:20.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.385 --rc genhtml_branch_coverage=1 00:32:20.385 --rc genhtml_function_coverage=1 00:32:20.385 --rc genhtml_legend=1 00:32:20.385 --rc geninfo_all_blocks=1 00:32:20.385 --rc 
geninfo_unexecuted_blocks=1 00:32:20.385 00:32:20.385 ' 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:20.385 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:20.386 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:20.386 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.386 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:20.386 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.386 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:20.386 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:20.386 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:20.386 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:26.956 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:26.956 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:26.956 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:26.956 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:26.956 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:26.956 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:32:26.956 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:26.956 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:26.956 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:26.956 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:26.956 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:26.956 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:26.956 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:26.956 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:26.957 14:34:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:26.957 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:26.957 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:32:26.957 Found net devices under 0000:af:00.0: cvl_0_0 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:26.957 Found net devices under 0000:af:00.1: cvl_0_1 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:26.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:26.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:32:26.957 00:32:26.957 --- 10.0.0.2 ping statistics --- 00:32:26.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.957 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:26.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:26.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:32:26.957 00:32:26.957 --- 10.0.0.1 ping statistics --- 00:32:26.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.957 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:26.957 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:26.958 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:26.958 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:26.958 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:26.958 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:26.958 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:26.958 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:26.958 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:26.958 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1866489 00:32:26.958 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1866489 00:32:26.958 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:26.958 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1866489 ']' 00:32:26.958 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:26.958 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:26.958 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:26.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
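To recap the plumbing the trace just performed: nvmf_tcp_init moves the target-side E810 port (cvl_0_0) into a private network namespace while the initiator port (cvl_0_1) stays in the root namespace, so initiator and target talk over a real TCP hop between 10.0.0.1 and 10.0.0.2 on a single machine. Condensed from the commands in the trace:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                 # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port, then verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1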
00:32:26.958 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:26.958 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.216 [2024-12-10 14:34:27.716407] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:27.216 [2024-12-10 14:34:27.717307] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:32:27.217 [2024-12-10 14:34:27.717340] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:27.217 [2024-12-10 14:34:27.800281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.217 [2024-12-10 14:34:27.839207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:27.217 [2024-12-10 14:34:27.839246] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:27.217 [2024-12-10 14:34:27.839253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:27.217 [2024-12-10 14:34:27.839259] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:27.217 [2024-12-10 14:34:27.839264] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:27.217 [2024-12-10 14:34:27.839781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.217 [2024-12-10 14:34:27.905598] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:27.217 [2024-12-10 14:34:27.905790] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
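With the interrupt-mode target up (pid 1866489, core mask 0x2), queue_depth.sh provisions it over JSON-RPC and then drives I/O from a second, RPC-controlled bdevperf. The rpc_cmd calls that follow in the trace are equivalent to these direct rpc.py invocations — a sketch; the harness goes through its rpc_cmd wrapper, and all flag values are taken verbatim from the trace:

    # target side (default /var/tmp/spdk.sock)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0     # 64 MB ramdisk, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf started with -z waits for RPCs on its own socket
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests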
00:32:27.217 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:27.217 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:32:27.217 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:27.217 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:27.217 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.476 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:27.476 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:27.476 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.476 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.476 [2024-12-10 14:34:27.972470] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:27.476 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.476 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:27.476 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.476 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.476 Malloc0 00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.476 [2024-12-10 14:34:28.048456] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1866556 00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1866556 /var/tmp/bdevperf.sock 00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1866556 ']' 00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:27.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:27.476 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.476 [2024-12-10 14:34:28.096809] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:32:27.476 [2024-12-10 14:34:28.096850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1866556 ] 00:32:27.477 [2024-12-10 14:34:28.174444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.735 [2024-12-10 14:34:28.215403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.735 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:27.735 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:32:27.735 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:27.735 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.735 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.735 NVMe0n1 00:32:27.735 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.736 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:27.736 Running I/O for 10 seconds... 00:32:30.048 12062.00 IOPS, 47.12 MiB/s [2024-12-10T13:34:31.725Z] 12281.50 IOPS, 47.97 MiB/s [2024-12-10T13:34:32.662Z] 12288.00 IOPS, 48.00 MiB/s [2024-12-10T13:34:33.599Z] 12365.00 IOPS, 48.30 MiB/s [2024-12-10T13:34:34.536Z] 12434.00 IOPS, 48.57 MiB/s [2024-12-10T13:34:35.487Z] 12459.83 IOPS, 48.67 MiB/s [2024-12-10T13:34:36.863Z] 12506.43 IOPS, 48.85 MiB/s [2024-12-10T13:34:37.800Z] 12526.38 IOPS, 48.93 MiB/s [2024-12-10T13:34:38.737Z] 12554.00 IOPS, 49.04 MiB/s [2024-12-10T13:34:38.737Z] 12575.80 IOPS, 49.12 MiB/s 00:32:37.997 Latency(us) 00:32:37.997 [2024-12-10T13:34:38.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.997 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:32:37.997 Verification LBA range: start 0x0 length 0x4000 00:32:37.997 NVMe0n1 : 10.11 12540.56 48.99 0.00 0.00 81072.47 19099.06 69405.74 00:32:37.997 [2024-12-10T13:34:38.737Z] =================================================================================================================== 00:32:37.997 [2024-12-10T13:34:38.737Z] Total : 12540.56 48.99 0.00 0.00 81072.47 19099.06 69405.74 00:32:37.997 { 00:32:37.997 "results": [ 00:32:37.997 { 00:32:37.997 "job": "NVMe0n1", 00:32:37.997 "core_mask": "0x1", 00:32:37.997 "workload": "verify", 00:32:37.997 "status": "finished", 00:32:37.997 "verify_range": { 00:32:37.997 "start": 0, 00:32:37.997 "length": 16384 00:32:37.997 }, 00:32:37.997 "queue_depth": 1024, 00:32:37.997 "io_size": 4096, 00:32:37.997 "runtime": 10.108082, 00:32:37.997 "iops": 12540.559128823847, 00:32:37.997 "mibps": 48.98655909696815, 00:32:37.997 "io_failed": 0, 00:32:37.997 "io_timeout": 0, 00:32:37.997 "avg_latency_us": 81072.46501765413, 00:32:37.997 "min_latency_us": 19099.062857142857, 00:32:37.997 "max_latency_us": 69405.74476190476 00:32:37.997 } 
00:32:37.997 ], 00:32:37.997 "core_count": 1 00:32:37.997 } 00:32:37.997 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1866556 00:32:37.997 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1866556 ']' 00:32:37.997 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1866556 00:32:37.997 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:32:37.997 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:37.997 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1866556 00:32:37.997 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:37.997 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:37.997 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1866556' 00:32:37.997 killing process with pid 1866556 00:32:37.997 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1866556 00:32:37.997 Received shutdown signal, test time was about 10.000000 seconds 00:32:37.997 00:32:37.997 Latency(us) 00:32:37.997 [2024-12-10T13:34:38.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.997 [2024-12-10T13:34:38.737Z] =================================================================================================================== 00:32:37.997 [2024-12-10T13:34:38.737Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:37.997 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1866556 00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:38.257 rmmod nvme_tcp 00:32:38.257 rmmod nvme_fabrics 00:32:38.257 rmmod nvme_keyring 00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
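The JSON above reports ~12,540 IOPS at an average latency of ~81,072 µs with 1024 outstanding I/Os, and the two figures are mutually consistent: by Little's law, average latency ≈ queue_depth / IOPS = 1024 / 12540.56 ≈ 81,655 µs, within about 1% of the reported value (the residual is start-up/drain time inside the 10.11 s runtime). A quick way to check that arithmetic from a shell:

    # Little's law sanity check: latency = outstanding I/Os / throughput
    awk 'BEGIN { printf "%.0f us\n", 1024 / 12540.56 * 1e6 }'    # ~81655 us vs. 81072 us reported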
00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1866489 ']' 00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1866489 00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1866489 ']' 00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1866489 00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1866489 00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1866489' 00:32:38.257 killing process with pid 1866489 00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1866489 00:32:38.257 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1866489 00:32:38.517 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:38.517 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:38.517 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:38.517 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:32:38.517 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:32:38.517 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:38.517 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:32:38.517 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:38.517 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:38.517 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.517 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:38.517 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:41.053 00:32:41.053 real 0m20.554s 00:32:41.053 user 0m22.987s 00:32:41.053 sys 0m6.757s 00:32:41.053 14:34:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:41.053 ************************************ 00:32:41.053 END TEST nvmf_queue_depth 00:32:41.053 ************************************ 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:41.053 ************************************ 00:32:41.053 START TEST nvmf_target_multipath 00:32:41.053 ************************************ 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:41.053 * Looking for test storage... 00:32:41.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:41.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.053 --rc genhtml_branch_coverage=1 00:32:41.053 --rc genhtml_function_coverage=1 00:32:41.053 --rc genhtml_legend=1 00:32:41.053 --rc geninfo_all_blocks=1 00:32:41.053 --rc geninfo_unexecuted_blocks=1 00:32:41.053 00:32:41.053 ' 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:41.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.053 --rc genhtml_branch_coverage=1 00:32:41.053 --rc genhtml_function_coverage=1 00:32:41.053 --rc genhtml_legend=1 00:32:41.053 --rc geninfo_all_blocks=1 00:32:41.053 --rc geninfo_unexecuted_blocks=1 00:32:41.053 00:32:41.053 ' 00:32:41.053 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:41.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.053 --rc genhtml_branch_coverage=1 00:32:41.053 --rc genhtml_function_coverage=1 00:32:41.054 --rc genhtml_legend=1 
00:32:41.054 --rc geninfo_all_blocks=1 00:32:41.054 --rc geninfo_unexecuted_blocks=1 00:32:41.054 00:32:41.054 ' 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:41.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.054 --rc genhtml_branch_coverage=1 00:32:41.054 --rc genhtml_function_coverage=1 00:32:41.054 --rc genhtml_legend=1 00:32:41.054 --rc geninfo_all_blocks=1 00:32:41.054 --rc geninfo_unexecuted_blocks=1 00:32:41.054 00:32:41.054 ' 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:32:41.054 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:47.742 14:34:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:47.742 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:47.742 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:47.742 14:34:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:47.742 Found net devices under 0000:af:00.0: cvl_0_0 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:47.742 Found net devices under 0000:af:00.1: cvl_0_1 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:47.742 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:47.743 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:47.743 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:47.743 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:47.743 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:47.743 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:47.743 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:47.743 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:47.743 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:47.743 14:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:47.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:47.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:32:47.743 00:32:47.743 --- 10.0.0.2 ping statistics --- 00:32:47.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.743 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:47.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:47.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:32:47.743 00:32:47.743 --- 10.0.0.1 ping statistics --- 00:32:47.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.743 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:32:47.743 only one NIC for nvmf test 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:47.743 rmmod nvme_tcp 00:32:47.743 rmmod nvme_fabrics 00:32:47.743 rmmod nvme_keyring 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:47.743 14:34:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:47.743 14:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:49.649 14:34:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:49.649 00:32:49.649 real 0m9.050s 00:32:49.649 user 0m2.031s 00:32:49.649 sys 0m5.044s 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:49.649 ************************************ 00:32:49.649 END TEST nvmf_target_multipath 00:32:49.649 ************************************ 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:49.649 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:49.909 ************************************ 00:32:49.909 START TEST nvmf_zcopy 00:32:49.909 ************************************ 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:49.909 * Looking for test storage... 
00:32:49.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:49.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.909 --rc genhtml_branch_coverage=1 00:32:49.909 --rc genhtml_function_coverage=1 00:32:49.909 --rc genhtml_legend=1 00:32:49.909 --rc geninfo_all_blocks=1 00:32:49.909 --rc geninfo_unexecuted_blocks=1 00:32:49.909 00:32:49.909 ' 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:49.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.909 --rc genhtml_branch_coverage=1 00:32:49.909 --rc genhtml_function_coverage=1 00:32:49.909 --rc genhtml_legend=1 00:32:49.909 --rc geninfo_all_blocks=1 00:32:49.909 --rc geninfo_unexecuted_blocks=1 00:32:49.909 00:32:49.909 ' 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:49.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.909 --rc genhtml_branch_coverage=1 00:32:49.909 --rc genhtml_function_coverage=1 00:32:49.909 --rc genhtml_legend=1 00:32:49.909 --rc geninfo_all_blocks=1 00:32:49.909 --rc geninfo_unexecuted_blocks=1 00:32:49.909 00:32:49.909 ' 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:49.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.909 --rc genhtml_branch_coverage=1 00:32:49.909 --rc genhtml_function_coverage=1 00:32:49.909 --rc genhtml_legend=1 00:32:49.909 --rc geninfo_all_blocks=1 00:32:49.909 --rc geninfo_unexecuted_blocks=1 00:32:49.909 00:32:49.909 ' 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:49.909 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:49.910 14:34:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:32:49.910 14:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:32:56.481 14:34:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:56.481 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:56.481 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:56.482 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:56.482 Found net devices under 0000:af:00.0: cvl_0_0 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:56.482 Found net devices under 0000:af:00.1: cvl_0_1 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:56.482 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:56.482 14:34:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:56.741 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:56.741 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:56.741 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:56.741 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:56.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:56.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:32:56.741 00:32:56.741 --- 10.0.0.2 ping statistics --- 00:32:56.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:56.741 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:32:56.741 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:56.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:56.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:32:56.741 00:32:56.741 --- 10.0.0.1 ping statistics --- 00:32:56.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:56.741 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:32:56.741 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:56.741 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:32:56.741 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:56.741 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:56.741 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:56.741 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:56.741 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:56.741 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:56.741 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:56.741 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:32:56.741 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:56.741 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:56.741 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:56.741 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1875928 00:32:56.741 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:32:56.741 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1875928 00:32:56.742 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1875928 ']' 00:32:56.742 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:56.742 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:56.742 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:56.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:56.742 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:56.742 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:56.742 [2024-12-10 14:34:57.373586] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:56.742 [2024-12-10 14:34:57.374489] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:32:56.742 [2024-12-10 14:34:57.374524] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:56.742 [2024-12-10 14:34:57.459286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.001 [2024-12-10 14:34:57.499324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:57.001 [2024-12-10 14:34:57.499356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:57.001 [2024-12-10 14:34:57.499363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:57.001 [2024-12-10 14:34:57.499369] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:57.001 [2024-12-10 14:34:57.499374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:57.001 [2024-12-10 14:34:57.499885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:57.001 [2024-12-10 14:34:57.567329] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:57.001 [2024-12-10 14:34:57.567521] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
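The stretch above is nvmftestinit building a loopback test topology out of the two e810 ports and then starting the target: cvl_0_0 is moved into a private network namespace and addressed 10.0.0.2/24 (the target side), cvl_0_1 stays in the root namespace as 10.0.0.1/24 (the initiator side), TCP port 4420 is opened in the firewall, both directions are ping-verified, and nvmf_tgt is launched inside the namespace in interrupt mode, pinned to core 1 (-m 0x2). Condensed to its effective commands, the plumbing looks like this (a sketch paraphrasing the invocations logged above, run as root from the spdk tree):

    ns=cvl_0_0_ns_spdk
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"                    # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    # the ipts wrapper used by the harness also tags this rule with an SPDK_NVMF comment
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root namespace -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1             # namespace -> initiator
    ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &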
00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.569 [2024-12-10 14:34:58.260634] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.569 [2024-12-10 14:34:58.288829] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:32:57.569 14:34:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.569 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.829 malloc0 00:32:57.829 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.829 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:57.829 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.829 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.829 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.829 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:32:57.829 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:32:57.829 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:57.829 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:57.829 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:57.829 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:57.829 { 00:32:57.829 "params": { 00:32:57.829 "name": "Nvme$subsystem", 00:32:57.829 "trtype": "$TEST_TRANSPORT", 00:32:57.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:57.829 "adrfam": "ipv4", 00:32:57.829 "trsvcid": "$NVMF_PORT", 00:32:57.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:57.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:57.829 "hdgst": ${hdgst:-false}, 00:32:57.829 "ddgst": ${ddgst:-false} 00:32:57.829 }, 00:32:57.829 "method": "bdev_nvme_attach_controller" 00:32:57.829 } 00:32:57.829 EOF 00:32:57.829 )") 00:32:57.829 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:57.829 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:57.829 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:57.829 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:57.829 "params": { 00:32:57.829 "name": "Nvme1", 00:32:57.829 "trtype": "tcp", 00:32:57.829 "traddr": "10.0.0.2", 00:32:57.829 "adrfam": "ipv4", 00:32:57.829 "trsvcid": "4420", 00:32:57.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:57.829 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:57.829 "hdgst": false, 00:32:57.829 "ddgst": false 00:32:57.829 }, 00:32:57.829 "method": "bdev_nvme_attach_controller" 00:32:57.829 }' 00:32:57.829 [2024-12-10 14:34:58.385816] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
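Once waitforlisten returns, zcopy.sh@22-30 configures the target entirely over JSON-RPC: a TCP transport with zero-copy enabled (--zcopy) and what appears to be in-capsule data disabled (-c 0), subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces (-m 10), data and discovery listeners on 10.0.0.2:4420, and a 32 MB / 4096-byte-block malloc bdev exported as namespace 1. rpc_cmd is the autotest wrapper; against a bare tree the equivalent sequence would look roughly like this (a sketch, assuming the stock scripts/rpc.py talking to the default /var/tmp/spdk.sock):

    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0    # 32 MB total, 4096-byte blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1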
00:32:57.829 [2024-12-10 14:34:58.385863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1876171 ] 00:32:57.829 [2024-12-10 14:34:58.464103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.829 [2024-12-10 14:34:58.503186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:58.088 Running I/O for 10 seconds... 00:33:00.401 8425.00 IOPS, 65.82 MiB/s [2024-12-10T13:35:01.709Z] 8519.00 IOPS, 66.55 MiB/s [2024-12-10T13:35:03.086Z] 8549.00 IOPS, 66.79 MiB/s [2024-12-10T13:35:04.021Z] 8565.25 IOPS, 66.92 MiB/s [2024-12-10T13:35:04.957Z] 8588.60 IOPS, 67.10 MiB/s [2024-12-10T13:35:05.896Z] 8601.50 IOPS, 67.20 MiB/s [2024-12-10T13:35:06.831Z] 8620.14 IOPS, 67.34 MiB/s [2024-12-10T13:35:07.766Z] 8626.12 IOPS, 67.39 MiB/s [2024-12-10T13:35:09.142Z] 8626.33 IOPS, 67.39 MiB/s [2024-12-10T13:35:09.142Z] 8618.10 IOPS, 67.33 MiB/s 00:33:08.402 Latency(us) 00:33:08.402 [2024-12-10T13:35:09.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:08.402 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:33:08.402 Verification LBA range: start 0x0 length 0x1000 00:33:08.402 Nvme1n1 : 10.05 8585.05 67.07 0.00 0.00 14810.37 2200.14 42941.68 00:33:08.402 [2024-12-10T13:35:09.142Z] =================================================================================================================== 00:33:08.402 [2024-12-10T13:35:09.142Z] Total : 8585.05 67.07 0.00 0.00 14810.37 2200.14 42941.68 00:33:08.402 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1877751 00:33:08.402 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:08.402 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:08.402 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:08.402 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:08.402 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:08.402 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:08.402 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:08.402 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:08.402 { 00:33:08.402 "params": { 00:33:08.402 "name": "Nvme$subsystem", 00:33:08.402 "trtype": "$TEST_TRANSPORT", 00:33:08.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:08.402 "adrfam": "ipv4", 00:33:08.402 "trsvcid": "$NVMF_PORT", 00:33:08.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:08.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:08.402 "hdgst": ${hdgst:-false}, 00:33:08.402 "ddgst": ${ddgst:-false} 00:33:08.402 }, 00:33:08.402 "method": "bdev_nvme_attach_controller" 00:33:08.402 } 00:33:08.402 EOF 00:33:08.402 )") 00:33:08.402 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:08.402 
[2024-12-10 14:35:08.932222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.402 [2024-12-10 14:35:08.932254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.402 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:08.402 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:08.402 [2024-12-10 14:35:08.940184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.402 [2024-12-10 14:35:08.940197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.402 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:08.402 "params": { 00:33:08.402 "name": "Nvme1", 00:33:08.402 "trtype": "tcp", 00:33:08.402 "traddr": "10.0.0.2", 00:33:08.402 "adrfam": "ipv4", 00:33:08.402 "trsvcid": "4420", 00:33:08.402 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:08.402 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:08.402 "hdgst": false, 00:33:08.402 "ddgst": false 00:33:08.402 }, 00:33:08.402 "method": "bdev_nvme_attach_controller" 00:33:08.402 }' 00:33:08.402 [2024-12-10 14:35:08.948180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.402 [2024-12-10 14:35:08.948197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.402 [2024-12-10 14:35:08.956179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.402 [2024-12-10 14:35:08.956189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.402 [2024-12-10 14:35:08.968183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.402 [2024-12-10 14:35:08.968192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.402 [2024-12-10 14:35:08.970457] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
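Also worth noting is how bdevperf receives its configuration: --json /dev/fd/62 (and /dev/fd/63 for the randrw run starting here) are bash process substitutions fed by gen_nvmf_target_json, whose printf-resolved output is visible above. The log only shows the bdev_nvme_attach_controller entry; the document bdevperf actually parses presumably wraps that entry in the standard subsystems/bdev envelope, along these lines (a sketch: the envelope shape is assumed, the inner object is verbatim from the printf above):

    cat <<'JSON'   # approximately what bdevperf reads from the pipe
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    JSON

The first run (-t 10 -q 128 -w verify -o 8192) produced the steady ~8.6k IOPS / ~67 MiB/s verify numbers tabulated above; the second (-t 5 -q 128 -w randrw -M 50 -o 8192) drives the mixed workload whose output is interleaved with the namespace errors that follow.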
00:33:08.402 [2024-12-10 14:35:08.970498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1877751 ] 00:33:08.402 [2024-12-10 14:35:08.980182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.402 [2024-12-10 14:35:08.980192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.402 [2024-12-10 14:35:08.992181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.402 [2024-12-10 14:35:08.992190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.402 [2024-12-10 14:35:09.004183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.402 [2024-12-10 14:35:09.004193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.402 [2024-12-10 14:35:09.016180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.402 [2024-12-10 14:35:09.016189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.402 [2024-12-10 14:35:09.028180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.402 [2024-12-10 14:35:09.028189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.402 [2024-12-10 14:35:09.040180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.402 [2024-12-10 14:35:09.040190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.402 [2024-12-10 14:35:09.047862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.402 [2024-12-10 14:35:09.052181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.402 [2024-12-10 14:35:09.052191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.402 [2024-12-10 14:35:09.064182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.402 [2024-12-10 14:35:09.064195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.402 [2024-12-10 14:35:09.076181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.402 [2024-12-10 14:35:09.076191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.402 [2024-12-10 14:35:09.087708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.402 [2024-12-10 14:35:09.088182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.402 [2024-12-10 14:35:09.088193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.402 [2024-12-10 14:35:09.100191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.402 [2024-12-10 14:35:09.100206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.402 [2024-12-10 14:35:09.112190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.402 [2024-12-10 14:35:09.112209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.402 [2024-12-10 14:35:09.124188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:33:08.402 [2024-12-10 14:35:09.124203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.402 [2024-12-10 14:35:09.136196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.402 [2024-12-10 14:35:09.136232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.661 [2024-12-10 14:35:09.148189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.661 [2024-12-10 14:35:09.148205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.661 [2024-12-10 14:35:09.160183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.661 [2024-12-10 14:35:09.160194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.661 [2024-12-10 14:35:09.172194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.661 [2024-12-10 14:35:09.172211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.661 [2024-12-10 14:35:09.184187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.661 [2024-12-10 14:35:09.184203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.661 [2024-12-10 14:35:09.196185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.661 [2024-12-10 14:35:09.196200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.661 [2024-12-10 14:35:09.208185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.661 [2024-12-10 14:35:09.208199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.661 [2024-12-10 14:35:09.220180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.661 [2024-12-10 14:35:09.220192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.661 [2024-12-10 14:35:09.232179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.661 [2024-12-10 14:35:09.232190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.661 [2024-12-10 14:35:09.244183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.661 [2024-12-10 14:35:09.244199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.661 [2024-12-10 14:35:09.256186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.661 [2024-12-10 14:35:09.256200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.661 [2024-12-10 14:35:09.268180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.661 [2024-12-10 14:35:09.268190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.661 [2024-12-10 14:35:09.280179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.661 [2024-12-10 14:35:09.280189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.661 [2024-12-10 14:35:09.292182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.661 [2024-12-10 14:35:09.292193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.661 [2024-12-10 
14:35:09.304182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.661 [2024-12-10 14:35:09.304194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.661 [2024-12-10 14:35:09.316180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.661 [2024-12-10 14:35:09.316189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.661 [2024-12-10 14:35:09.328180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.661 [2024-12-10 14:35:09.328189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.661 [2024-12-10 14:35:09.340183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.661 [2024-12-10 14:35:09.340197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.661 [2024-12-10 14:35:09.352187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.661 [2024-12-10 14:35:09.352205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.661 Running I/O for 5 seconds... 00:33:08.662 [2024-12-10 14:35:09.369556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.662 [2024-12-10 14:35:09.369581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.662 [2024-12-10 14:35:09.384255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.662 [2024-12-10 14:35:09.384274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.662 [2024-12-10 14:35:09.395671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.662 [2024-12-10 14:35:09.395691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.920 [2024-12-10 14:35:09.409082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.920 [2024-12-10 14:35:09.409101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.920 [2024-12-10 14:35:09.423646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.920 [2024-12-10 14:35:09.423666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.920 [2024-12-10 14:35:09.436264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.920 [2024-12-10 14:35:09.436285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.920 [2024-12-10 14:35:09.448954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.920 [2024-12-10 14:35:09.448973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.920 [2024-12-10 14:35:09.463831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.920 [2024-12-10 14:35:09.463850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.921 [2024-12-10 14:35:09.477804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.921 [2024-12-10 14:35:09.477822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.921 [2024-12-10 14:35:09.492084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
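From this point until the randrw run ends, the log is dominated by one repeating two-line pair: zcopy.sh keeps trying to re-add NSID 1 to cnode1 while bdevperf drives I/O, each attempt is rejected by spdk_nvmf_subsystem_add_ns_ext because the namespace is already attached, and the failure is reported back from nvmf_rpc_ns_paused. These *ERROR* lines are expected: every rejected add still goes through a subsystem pause/resume (the callback name nvmf_rpc_ns_paused gives this away), which is what exercises queued zero-copy requests across pause/resume cycles. The driving loop is presumably of this shape (a sketch inferred from the repeating errors; perfpid and the || true handling are illustrative, the RPC line matches zcopy.sh@30 above):

    # hammer the subsystem with (failing) namespace adds while I/O is in flight
    while kill -0 "$perfpid" 2> /dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done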
00:33:08.921 [2024-12-10 14:35:09.492103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.921 [2024-12-10 14:35:09.503311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.921 [2024-12-10 14:35:09.503330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.921 [2024-12-10 14:35:09.516746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.921 [2024-12-10 14:35:09.516765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.921 [2024-12-10 14:35:09.531853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.921 [2024-12-10 14:35:09.531872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.921 [2024-12-10 14:35:09.543911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.921 [2024-12-10 14:35:09.543939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.921 [2024-12-10 14:35:09.556910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.921 [2024-12-10 14:35:09.556928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.921 [2024-12-10 14:35:09.571978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.921 [2024-12-10 14:35:09.571997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.921 [2024-12-10 14:35:09.585387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.921 [2024-12-10 14:35:09.585406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.921 [2024-12-10 14:35:09.599522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.921 [2024-12-10 14:35:09.599540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.921 [2024-12-10 14:35:09.613099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.921 [2024-12-10 14:35:09.613118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.921 [2024-12-10 14:35:09.627680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.921 [2024-12-10 14:35:09.627703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.921 [2024-12-10 14:35:09.640990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.921 [2024-12-10 14:35:09.641008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.921 [2024-12-10 14:35:09.655984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.921 [2024-12-10 14:35:09.656002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.180 [2024-12-10 14:35:09.669135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.180 [2024-12-10 14:35:09.669155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.180 [2024-12-10 14:35:09.683550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.180 [2024-12-10 14:35:09.683569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.180 [2024-12-10 14:35:09.697730] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:09.180 [2024-12-10 14:35:09.697748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the *ERROR* pair above repeats with fresh timestamps every ~10-15 ms, from 14:35:09.712464 through 14:35:13.724519, as the test keeps re-requesting NSID 1; the only other records in that stretch are these periodic fio bandwidth samples:]
00:33:09.698 16997.00 IOPS, 132.79 MiB/s [2024-12-10T13:35:10.438Z]
00:33:10.735 17039.00 IOPS, 133.12 MiB/s [2024-12-10T13:35:11.475Z]
00:33:11.770 17003.33 IOPS, 132.84 MiB/s [2024-12-10T13:35:12.510Z]
00:33:12.807 17048.25 IOPS, 133.19 MiB/s [2024-12-10T13:35:13.547Z]
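The repeated *ERROR* pair is the SPDK nvmf target refusing to attach a namespace because NSID 1 is already in use on the subsystem (spdk_nvmf_subsystem_add_ns_ext in subsystem.c), after which nvmf_rpc.c fails the corresponding RPC. As a minimal sketch, not this test's actual script, the same failure can be provoked by hand with SPDK's scripts/rpc.py against an already-running nvmf target app; the bdev name and NQN below are illustrative assumptions, not values taken from this run:

# assumed: an SPDK nvmf target app is already running and serving RPCs
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512                            # illustrative 64 MiB malloc bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a             # illustrative subsystem NQN, allow any host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # first add succeeds: NSID 1 attached
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # second add fails: Requested NSID 1 already in use

The interleaved fio samples are mutually consistent with an 8 KiB I/O size: 132.79 MiB/s divided by 16997 IOPS works out to roughly 8192 bytes per I/O, and the other three samples give the same figure, so the data path holds a steady ~17k IOPS while the duplicate-add RPCs fail.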
00:33:13.066 [2024-12-10 14:35:13.548428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.066 [2024-12-10 14:35:13.560758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.066 [2024-12-10 14:35:13.560778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.066 [2024-12-10 14:35:13.575548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.066 [2024-12-10 14:35:13.575571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.066 [2024-12-10 14:35:13.590054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.066 [2024-12-10 14:35:13.590073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.066 [2024-12-10 14:35:13.604638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.066 [2024-12-10 14:35:13.604656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.066 [2024-12-10 14:35:13.619584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.066 [2024-12-10 14:35:13.619602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.066 [2024-12-10 14:35:13.633153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.066 [2024-12-10 14:35:13.633171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.066 [2024-12-10 14:35:13.645510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.066 [2024-12-10 14:35:13.645528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.066 [2024-12-10 14:35:13.659783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.066 [2024-12-10 14:35:13.659801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.066 [2024-12-10 14:35:13.673177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.066 [2024-12-10 14:35:13.673195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.066 [2024-12-10 14:35:13.688328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.066 [2024-12-10 14:35:13.688346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.066 [2024-12-10 14:35:13.701159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.066 [2024-12-10 14:35:13.701177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.066 [2024-12-10 14:35:13.713524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.066 [2024-12-10 14:35:13.713542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.066 [2024-12-10 14:35:13.724501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.066 [2024-12-10 14:35:13.724519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.066 [2024-12-10 14:35:13.738332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.066 [2024-12-10 14:35:13.738352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.066 [2024-12-10 14:35:13.752618] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.066 [2024-12-10 14:35:13.752636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.066 [2024-12-10 14:35:13.767518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.066 [2024-12-10 14:35:13.767537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.066 [2024-12-10 14:35:13.781830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.066 [2024-12-10 14:35:13.781848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.066 [2024-12-10 14:35:13.795997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.066 [2024-12-10 14:35:13.796017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.325 [2024-12-10 14:35:13.809591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.325 [2024-12-10 14:35:13.809610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.325 [2024-12-10 14:35:13.824252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.325 [2024-12-10 14:35:13.824282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.325 [2024-12-10 14:35:13.835117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.325 [2024-12-10 14:35:13.835140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.325 [2024-12-10 14:35:13.849674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.325 [2024-12-10 14:35:13.849692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.325 [2024-12-10 14:35:13.864328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.325 [2024-12-10 14:35:13.864349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.325 [2024-12-10 14:35:13.875611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.325 [2024-12-10 14:35:13.875629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.325 [2024-12-10 14:35:13.889776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.325 [2024-12-10 14:35:13.889794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.325 [2024-12-10 14:35:13.904289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.325 [2024-12-10 14:35:13.904307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.325 [2024-12-10 14:35:13.915156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.325 [2024-12-10 14:35:13.915176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.325 [2024-12-10 14:35:13.929919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.325 [2024-12-10 14:35:13.929937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.325 [2024-12-10 14:35:13.944523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.325 [2024-12-10 14:35:13.944540] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.325 [2024-12-10 14:35:13.959778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.325 [2024-12-10 14:35:13.959797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.325 [2024-12-10 14:35:13.974188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.325 [2024-12-10 14:35:13.974206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.325 [2024-12-10 14:35:13.988882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.325 [2024-12-10 14:35:13.988899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.325 [2024-12-10 14:35:14.003380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.325 [2024-12-10 14:35:14.003399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.325 [2024-12-10 14:35:14.017858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.325 [2024-12-10 14:35:14.017877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.325 [2024-12-10 14:35:14.032027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.325 [2024-12-10 14:35:14.032045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.325 [2024-12-10 14:35:14.045570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.325 [2024-12-10 14:35:14.045588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.325 [2024-12-10 14:35:14.059747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.325 [2024-12-10 14:35:14.059766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.585 [2024-12-10 14:35:14.072971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.585 [2024-12-10 14:35:14.072990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.585 [2024-12-10 14:35:14.087868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.585 [2024-12-10 14:35:14.087887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.585 [2024-12-10 14:35:14.101532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.585 [2024-12-10 14:35:14.101554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.585 [2024-12-10 14:35:14.116354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.585 [2024-12-10 14:35:14.116373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.585 [2024-12-10 14:35:14.127678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.585 [2024-12-10 14:35:14.127697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.585 [2024-12-10 14:35:14.142267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.585 [2024-12-10 14:35:14.142286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.585 [2024-12-10 14:35:14.156829] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.585 [2024-12-10 14:35:14.156847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.585 [2024-12-10 14:35:14.171527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.585 [2024-12-10 14:35:14.171544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.585 [2024-12-10 14:35:14.185609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.585 [2024-12-10 14:35:14.185628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.585 [2024-12-10 14:35:14.199981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.585 [2024-12-10 14:35:14.199999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.585 [2024-12-10 14:35:14.213757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.585 [2024-12-10 14:35:14.213774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.585 [2024-12-10 14:35:14.228247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.585 [2024-12-10 14:35:14.228266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.585 [2024-12-10 14:35:14.238986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.585 [2024-12-10 14:35:14.239003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.585 [2024-12-10 14:35:14.253649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.585 [2024-12-10 14:35:14.253667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.585 [2024-12-10 14:35:14.267719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.585 [2024-12-10 14:35:14.267737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.585 [2024-12-10 14:35:14.280545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.585 [2024-12-10 14:35:14.280562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.585 [2024-12-10 14:35:14.295769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.585 [2024-12-10 14:35:14.295787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.585 [2024-12-10 14:35:14.309346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.585 [2024-12-10 14:35:14.309364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.844 [2024-12-10 14:35:14.324298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.844 [2024-12-10 14:35:14.324317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.844 [2024-12-10 14:35:14.337784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.844 [2024-12-10 14:35:14.337803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.844 [2024-12-10 14:35:14.352519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.844 [2024-12-10 14:35:14.352542] 
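The repeating two-line failure above (and continuing below) is mechanical: while I/O is in flight against NSID 1, an add-namespace RPC asking for that same NSID keeps being re-issued; each attempt is rejected inside spdk_nvmf_subsystem_add_ns_ext and surfaces through the paused-subsystem handler in nvmf_rpc.c. A minimal way to reproduce the same pair of errors by hand, assuming a target that already exposes NSID 1 on cnode1 (the bdev name Malloc1 is hypothetical, purely for illustration; the RPC socket is the default /var/tmp/spdk.sock):

  # sketch: requesting an explicit NSID that is already allocated fails in
  # spdk_nvmf_subsystem_add_ns_ext, producing exactly the error pair logged here
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1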
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:13.844 [2024-12-10 14:35:14.368405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:13.844 [2024-12-10 14:35:14.368428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:13.844 17055.40 IOPS, 133.25 MiB/s [2024-12-10T13:35:14.584Z]
00:33:13.844 [2024-12-10 14:35:14.381082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:13.844 [2024-12-10 14:35:14.381099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:13.844
00:33:13.844 Latency(us)
00:33:13.844 [2024-12-10T13:35:14.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:13.844 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:33:13.844 Nvme1n1 : 5.01 17053.98 133.23 0.00 0.00 7498.06 1778.83 12670.29
00:33:13.844 [2024-12-10T13:35:14.584Z] ===================================================================================================================
00:33:13.844 [2024-12-10T13:35:14.584Z] Total : 17053.98 133.23 0.00 0.00 7498.06 1778.83 12670.29
00:33:13.844 [2024-12-10 14:35:14.392185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:13.844 [2024-12-10 14:35:14.392202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... twelve more identical retry pairs follow at ~12 ms intervals (14:35:14.404 through 14:35:14.536) while the workload winds down ...]
00:33:13.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1877751) - No such process
00:33:13.845 14:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1877751
00:33:13.845 14:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[... rpc_cmd xtrace plumbing (autotest_common.sh@563 xtrace_disable / @10 set +x / @591 [[ 0 == 0 ]]) elided here and between the following calls ...]
00:33:13.845 14:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:33:13.845 delay0
00:33:13.845 14:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:33:13.845 14:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
[2024-12-10 14:35:14.647651] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:33:22.219 Initializing NVMe Controllers
00:33:22.219 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:22.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:33:22.219 Initialization complete. Launching workers.
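Pulled out of the xtrace noise, zcopy.sh@52-56 above is the interesting part of this phase: NSID 1 is swapped from the plain malloc bdev to a delay bdev with one-second average and p99 latencies on every operation, so the abort example has long-lived commands to cancel. Restated as plain rpc.py calls (a sketch; it assumes the default /var/tmp/spdk.sock RPC socket that rpc_cmd uses in these tests, with all flag values taken verbatim from the trace):

  # swap NSID 1 over to a 1 s delay bdev layered on malloc0
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read and write latency, in microseconds
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # every queued command now sits ~1 s inside delay0, so aborts can land before completion
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The abort accounting for this run follows.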
00:33:22.219 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 5283 00:33:22.219 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 5566, failed to submit 37 00:33:22.219 success 5421, unsuccessful 145, failed 0 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:22.219 rmmod nvme_tcp 00:33:22.219 rmmod nvme_fabrics 00:33:22.219 rmmod nvme_keyring 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1875928 ']' 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1875928 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1875928 ']' 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1875928 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1875928 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1875928' 00:33:22.219 killing process with pid 1875928 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1875928 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1875928 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:22.219 14:35:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:22.219 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.156 14:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:23.156 00:33:23.156 real 0m33.469s 00:33:23.156 user 0m41.656s 00:33:23.156 sys 0m13.621s 00:33:23.156 14:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:23.156 14:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:23.156 ************************************ 00:33:23.156 END TEST nvmf_zcopy 00:33:23.156 ************************************ 00:33:23.416 14:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:23.416 14:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:23.416 14:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:23.416 14:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:23.416 ************************************ 00:33:23.416 START TEST nvmf_nmic 00:33:23.416 ************************************ 00:33:23.416 14:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:23.416 * Looking for test storage... 
00:33:23.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:23.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.416 --rc genhtml_branch_coverage=1 00:33:23.416 --rc genhtml_function_coverage=1 00:33:23.416 --rc genhtml_legend=1 00:33:23.416 --rc geninfo_all_blocks=1 00:33:23.416 --rc geninfo_unexecuted_blocks=1 00:33:23.416 00:33:23.416 ' 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:23.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.416 --rc genhtml_branch_coverage=1 00:33:23.416 --rc genhtml_function_coverage=1 00:33:23.416 --rc genhtml_legend=1 00:33:23.416 --rc geninfo_all_blocks=1 00:33:23.416 --rc geninfo_unexecuted_blocks=1 00:33:23.416 00:33:23.416 ' 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:23.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.416 --rc genhtml_branch_coverage=1 00:33:23.416 --rc genhtml_function_coverage=1 00:33:23.416 --rc genhtml_legend=1 00:33:23.416 --rc geninfo_all_blocks=1 00:33:23.416 --rc geninfo_unexecuted_blocks=1 00:33:23.416 00:33:23.416 ' 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:23.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.416 --rc genhtml_branch_coverage=1 00:33:23.416 --rc genhtml_function_coverage=1 00:33:23.416 --rc genhtml_legend=1 00:33:23.416 --rc geninfo_all_blocks=1 00:33:23.416 --rc geninfo_unexecuted_blocks=1 00:33:23.416 00:33:23.416 ' 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go triple repeated several more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... full value elided; the same toolchain dirs with go prepended ...]
00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... full value elided; the same toolchain dirs with protoc prepended ...]
00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... the same PATH value echoed; elided ...]
00:33:23.416 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:33:23.417 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:33:23.417 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:33:23.417 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:33:23.417 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:33:23.417 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:33:23.417 14:35:24
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:23.417 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:23.417 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:23.417 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:23.417 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:23.676 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:23.676 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:23.676 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:23.676 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:23.676 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:23.676 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:23.676 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:23.676 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:23.676 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.676 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:23.676 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.676 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:23.676 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:23.676 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:23.676 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:30.243 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:30.243 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:33:30.243 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:30.243 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:30.243 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:30.243 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:30.243 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:30.243 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:33:30.243 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:30.243 14:35:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:33:30.243 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:33:30.243 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:33:30.243 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:33:30.243 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:33:30.243 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:33:30.243 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:30.243 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:30.244 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:30.244 14:35:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:30.244 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:30.244 Found net devices under 0000:af:00.0: cvl_0_0 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.244 
14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:30.244 Found net devices under 0000:af:00.1: cvl_0_1 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
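For orientation, the namespace plumbing traced here by nvmf_tcp_init (nvmf/common.sh@250-287, continued just below with the link-up and iptables steps) amounts to the following sequence. Interface names and addresses are taken from this run; gathering the steps into one block is only a sketch:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1     # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # the target-side port moves into its own netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

Isolating the target end in cvl_0_0_ns_spdk is what lets one host act as both initiator and target over the two physical e810 ports; the cross-namespace pings that follow verify the path in both directions.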
00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:30.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:30.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:33:30.244 00:33:30.244 --- 10.0.0.2 ping statistics --- 00:33:30.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.244 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:30.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:30.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:33:30.244 00:33:30.244 --- 10.0.0.1 ping statistics --- 00:33:30.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.244 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1883578 00:33:30.244 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 1883578 00:33:30.245 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:30.245 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1883578 ']' 00:33:30.245 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.245 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:30.245 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:30.245 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:30.245 14:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:30.245 [2024-12-10 14:35:30.892628] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:30.245 [2024-12-10 14:35:30.893620] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:33:30.245 [2024-12-10 14:35:30.893659] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:30.245 [2024-12-10 14:35:30.979308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:30.504 [2024-12-10 14:35:31.021697] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:30.504 [2024-12-10 14:35:31.021730] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:30.504 [2024-12-10 14:35:31.021738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:30.504 [2024-12-10 14:35:31.021744] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:30.504 [2024-12-10 14:35:31.021750] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:30.504 [2024-12-10 14:35:31.023136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:30.504 [2024-12-10 14:35:31.023168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:30.504 [2024-12-10 14:35:31.023275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:30.504 [2024-12-10 14:35:31.023276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:30.504 [2024-12-10 14:35:31.093162] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:30.504 [2024-12-10 14:35:31.093484] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:30.504 [2024-12-10 14:35:31.094001] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
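The target binary itself runs inside that namespace (the nvmfappstart trace above), and its reactors and poll-group threads come up in interrupt mode, as the notices around this point show. A minimal hand-rolled equivalent is sketched below; the readiness loop is a simplified stand-in for the waitforlisten helper, not its actual implementation, and rpc_get_methods is used only as a cheap RPC to probe the socket:
  # Start nvmf_tgt in the target namespace with the flags traced above.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # Simplified readiness check: poll until the RPC socket answers.
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      kill -0 "$nvmfpid" || exit 1   # bail out if the target died early
      sleep 0.5
  done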
00:33:30.504 [2024-12-10 14:35:31.094188] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:30.504 [2024-12-10 14:35:31.094213] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:30.504 [2024-12-10 14:35:31.160103] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:30.504 Malloc0 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:30.504 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:33:30.763 [2024-12-10 14:35:31.248395] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:33:30.763 test case1: single bdev can't be used in multiple subsystems
00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:33:30.763 [2024-12-10 14:35:31.283823] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:33:30.763 [2024-12-10 14:35:31.283847] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:33:30.763 [2024-12-10 14:35:31.283854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:30.763 request:
00:33:30.763 {
00:33:30.763 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:33:30.763 "namespace": {
00:33:30.763 "bdev_name": "Malloc0",
00:33:30.763 "no_auto_visible": false,
00:33:30.763 "hide_metadata": false
00:33:30.763 },
00:33:30.763 "method": "nvmf_subsystem_add_ns",
00:33:30.763 "req_id": 1
00:33:30.763 }
00:33:30.763 Got JSON-RPC error response
00:33:30.763 response:
00:33:30.763 {
00:33:30.763 "code": -32602,
00:33:30.763 "message": "Invalid parameters"
00:33:30.763 }
00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:33:31 14:35:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:33:30.763 Adding namespace failed - expected result. 00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:33:30.763 test case2: host connect to nvmf target in multiple paths 00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:30.763 [2024-12-10 14:35:31.295928] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.763 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:31.022 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:33:31.280 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:33:31.280 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:33:31.280 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:31.280 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:31.280 14:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:33:33.181 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:33.181 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:33.181 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:33.181 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:33.181 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:33.181 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:33:33.181 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:33.181 [global] 00:33:33.181 thread=1 00:33:33.181 invalidate=1 
00:33:33.181 rw=write
00:33:33.181 time_based=1
00:33:33.181 runtime=1
00:33:33.181 ioengine=libaio
00:33:33.181 direct=1
00:33:33.181 bs=4096
00:33:33.181 iodepth=1
00:33:33.181 norandommap=0
00:33:33.181 numjobs=1
00:33:33.181 
00:33:33.181 verify_dump=1
00:33:33.181 verify_backlog=512
00:33:33.181 verify_state_save=0
00:33:33.181 do_verify=1
00:33:33.181 verify=crc32c-intel
00:33:33.181 [job0]
00:33:33.181 filename=/dev/nvme0n1
00:33:33.181 Could not set queue depth (nvme0n1)
00:33:33.439 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:33:33.439 fio-3.35
00:33:33.439 Starting 1 thread
00:33:34.815 
00:33:34.815 job0: (groupid=0, jobs=1): err= 0: pid=1884378: Tue Dec 10 14:35:35 2024
00:33:34.815 read: IOPS=22, BW=89.9KiB/s (92.1kB/s)(92.0KiB/1023msec)
00:33:34.815 slat (nsec): min=9389, max=25059, avg=23137.17, stdev=3023.31
00:33:34.815 clat (usec): min=40876, max=41036, avg=40959.64, stdev=36.96
00:33:34.815 lat (usec): min=40885, max=41059, avg=40982.78, stdev=38.38
00:33:34.815 clat percentiles (usec):
00:33:34.815 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:33:34.815 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:33:34.815 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:33:34.815 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:33:34.815 | 99.99th=[41157]
00:33:34.815 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets
00:33:34.815 slat (nsec): min=9720, max=44649, avg=11125.99, stdev=2512.93
00:33:34.815 clat (usec): min=116, max=339, avg=142.88, stdev=10.97
00:33:34.815 lat (usec): min=141, max=383, avg=154.00, stdev=12.28
00:33:34.815 clat percentiles (usec):
00:33:34.815 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 139],
00:33:34.815 | 30.00th=[ 141], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 143],
00:33:34.815 | 70.00th=[ 145], 80.00th=[ 147], 90.00th=[ 149], 95.00th=[ 153],
00:33:34.815 | 99.00th=[ 163], 99.50th=[ 190], 99.90th=[ 338], 99.95th=[ 338],
00:33:34.815 | 99.99th=[ 338]
00:33:34.815 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:33:34.815 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:33:34.815 lat (usec) : 250=95.51%, 500=0.19%
00:33:34.815 lat (msec) : 50=4.30%
00:33:34.815 cpu : usr=0.78%, sys=0.49%, ctx=535, majf=0, minf=1
00:33:34.815 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:33:34.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:34.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:34.815 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:33:34.816 latency : target=0, window=0, percentile=100.00%, depth=1
00:33:34.816 
00:33:34.816 Run status group 0 (all jobs):
00:33:34.816 READ: bw=89.9KiB/s (92.1kB/s), 89.9KiB/s-89.9KiB/s (92.1kB/s-92.1kB/s), io=92.0KiB (94.2kB), run=1023-1023msec
00:33:34.816 WRITE: bw=2002KiB/s (2050kB/s), 2002KiB/s-2002KiB/s (2050kB/s-2050kB/s), io=2048KiB (2097kB), run=1023-1023msec
00:33:34.816 
00:33:34.816 Disk stats (read/write):
00:33:34.816 nvme0n1: ios=69/512, merge=0/0, ticks=794/66, in_queue=860, util=91.18%
00:33:34.816 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:33:34.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:33:34.816 14:35:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:34.816 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:33:34.816 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:34.816 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:34.816 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:34.816 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:34.816 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:33:34.816 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:34.816 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:33:34.816 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:34.816 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:33:34.816 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:34.816 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:33:34.816 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:34.816 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:34.816 rmmod nvme_tcp 00:33:34.816 rmmod nvme_fabrics 00:33:35.074 rmmod nvme_keyring 00:33:35.074 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:35.075 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:33:35.075 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:33:35.075 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1883578 ']' 00:33:35.075 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1883578 00:33:35.075 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1883578 ']' 00:33:35.075 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1883578 00:33:35.075 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:33:35.075 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:35.075 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1883578 00:33:35.075 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:35.075 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:35.075 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 1883578' 00:33:35.075 killing process with pid 1883578 00:33:35.075 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1883578 00:33:35.075 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1883578 00:33:35.334 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:35.334 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:35.334 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:35.334 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:33:35.334 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:33:35.334 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:35.334 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:33:35.334 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:35.334 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:35.334 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.334 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:35.334 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.240 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:37.240 00:33:37.240 real 0m13.971s 00:33:37.240 user 0m24.469s 00:33:37.240 sys 0m6.670s 00:33:37.240 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:37.240 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:37.240 ************************************ 00:33:37.240 END TEST nvmf_nmic 00:33:37.240 ************************************ 00:33:37.240 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:37.240 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:37.240 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:37.240 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:37.500 ************************************ 00:33:37.501 START TEST nvmf_fio_target 00:33:37.501 ************************************ 00:33:37.501 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:37.501 * Looking for test storage... 
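Before the fio_target run gets going, note how nvmftestfini above unwound the nmic setup. Condensed into plain commands (interface and NQN names from this run; the netns deletion is an assumption about what remove_spdk_ns boils down to, not a quote of the helper):
  # Mirror of the setup: detach the initiator, unload kernel modules,
  # stop the target, drop the tagged firewall rule, undo the net split.
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                                  # killprocess in the log
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk                  # assumed remove_spdk_ns step
  ip -4 addr flush cvl_0_1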
00:33:37.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:37.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.501 --rc genhtml_branch_coverage=1 00:33:37.501 --rc genhtml_function_coverage=1 00:33:37.501 --rc genhtml_legend=1 00:33:37.501 --rc geninfo_all_blocks=1 00:33:37.501 --rc geninfo_unexecuted_blocks=1 00:33:37.501 00:33:37.501 ' 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:37.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.501 --rc genhtml_branch_coverage=1 00:33:37.501 --rc genhtml_function_coverage=1 00:33:37.501 --rc genhtml_legend=1 00:33:37.501 --rc geninfo_all_blocks=1 00:33:37.501 --rc geninfo_unexecuted_blocks=1 00:33:37.501 00:33:37.501 ' 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:37.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.501 --rc genhtml_branch_coverage=1 00:33:37.501 --rc genhtml_function_coverage=1 00:33:37.501 --rc genhtml_legend=1 00:33:37.501 --rc geninfo_all_blocks=1 00:33:37.501 --rc geninfo_unexecuted_blocks=1 00:33:37.501 00:33:37.501 ' 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:37.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.501 --rc genhtml_branch_coverage=1 00:33:37.501 --rc genhtml_function_coverage=1 00:33:37.501 --rc genhtml_legend=1 00:33:37.501 --rc geninfo_all_blocks=1 00:33:37.501 --rc geninfo_unexecuted_blocks=1 00:33:37.501 
00:33:37.501 ' 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.501 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:37.502 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:44.158 14:35:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:44.158 14:35:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:44.158 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:44.158 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:44.158 Found net 
devices under 0000:af:00.0: cvl_0_0 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:44.158 Found net devices under 0000:af:00.1: cvl_0_1 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:44.158 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:44.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:44.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:33:44.159 00:33:44.159 --- 10.0.0.2 ping statistics --- 00:33:44.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:44.159 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:44.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:44.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:33:44.159 00:33:44.159 --- 10.0.0.1 ping statistics --- 00:33:44.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:44.159 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1888410 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1888410 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1888410 ']' 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:44.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
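As in the first test, the host-side prerequisite is loaded before the target is provisioned: the modprobe nvme-tcp traced above pulls in the kernel NVMe/TCP initiator (with nvme-fabrics as a dependency). A quick hand-run sanity check at this point, not part of the script:
  # Confirm the kernel NVMe/TCP initiator stack is present.
  modprobe nvme-tcp
  lsmod | grep -E 'nvme_(tcp|fabrics)'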
00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:44.159 14:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:44.442 [2024-12-10 14:35:44.941089] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:44.442 [2024-12-10 14:35:44.942062] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:33:44.442 [2024-12-10 14:35:44.942102] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:44.442 [2024-12-10 14:35:45.028396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:44.442 [2024-12-10 14:35:45.067621] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:44.442 [2024-12-10 14:35:45.067659] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:44.442 [2024-12-10 14:35:45.067666] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:44.442 [2024-12-10 14:35:45.067673] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:44.442 [2024-12-10 14:35:45.067678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:44.442 [2024-12-10 14:35:45.069097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.442 [2024-12-10 14:35:45.069204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:44.442 [2024-12-10 14:35:45.069292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:44.442 [2024-12-10 14:35:45.069292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:44.442 [2024-12-10 14:35:45.137232] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:44.442 [2024-12-10 14:35:45.137381] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:44.443 [2024-12-10 14:35:45.138025] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:44.443 [2024-12-10 14:35:45.138192] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:44.443 [2024-12-10 14:35:45.138270] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
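
The notices above confirm what -e 0xFFFF --interrupt-mode requested: four reactors start and the app thread plus all four nvmf poll-group threads are switched to interrupt mode, so idle cores sleep on events instead of busy-polling. Once the target is up, the same state can be inspected over RPC; a sketch (the in_interrupt field is what recent SPDK releases report, but the exact schema may vary by version):

    # Dump per-reactor interrupt state from the running target.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock framework_get_reactors \
        | python3 -c 'import json,sys; [print(r["lcore"], r.get("in_interrupt")) for r in json.load(sys.stdin)["reactors"]]'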
00:33:45.379 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:45.379 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:33:45.379 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:45.379 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:45.379 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:45.379 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:45.379 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:45.379 [2024-12-10 14:35:45.986061] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:45.379 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:45.638 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:33:45.638 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:45.897 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:33:45.897 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:46.156 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:33:46.156 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:46.156 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:33:46.156 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:33:46.415 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:46.674 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:33:46.674 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:46.932 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:33:46.932 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:47.192 14:35:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:33:47.192 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:33:47.192 14:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:47.452 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:47.452 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:47.711 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:47.711 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:47.711 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:47.969 [2024-12-10 14:35:48.605992] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:47.969 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:33:48.227 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:33:48.486 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:48.745 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:33:48.745 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:33:48.745 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:48.745 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:33:48.745 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:33:48.745 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:33:50.646 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:50.646 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:33:50.646 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:50.646 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:33:50.646 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:50.646 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:33:50.646 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:50.646 [global] 00:33:50.646 thread=1 00:33:50.646 invalidate=1 00:33:50.646 rw=write 00:33:50.646 time_based=1 00:33:50.646 runtime=1 00:33:50.646 ioengine=libaio 00:33:50.646 direct=1 00:33:50.646 bs=4096 00:33:50.646 iodepth=1 00:33:50.646 norandommap=0 00:33:50.646 numjobs=1 00:33:50.646 00:33:50.646 verify_dump=1 00:33:50.646 verify_backlog=512 00:33:50.646 verify_state_save=0 00:33:50.646 do_verify=1 00:33:50.646 verify=crc32c-intel 00:33:50.646 [job0] 00:33:50.646 filename=/dev/nvme0n1 00:33:50.931 [job1] 00:33:50.931 filename=/dev/nvme0n2 00:33:50.931 [job2] 00:33:50.931 filename=/dev/nvme0n3 00:33:50.931 [job3] 00:33:50.931 filename=/dev/nvme0n4 00:33:50.931 Could not set queue depth (nvme0n1) 00:33:50.931 Could not set queue depth (nvme0n2) 00:33:50.931 Could not set queue depth (nvme0n3) 00:33:50.931 Could not set queue depth (nvme0n4) 00:33:51.193 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:51.193 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:51.193 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:51.193 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:51.193 fio-3.35 00:33:51.193 Starting 4 threads 00:33:52.564 00:33:52.564 job0: (groupid=0, jobs=1): err= 0: pid=1889743: Tue Dec 10 14:35:52 2024 00:33:52.564 read: IOPS=900, BW=3601KiB/s (3688kB/s)(3648KiB/1013msec) 00:33:52.564 slat (nsec): min=6871, max=26404, avg=8105.61, stdev=2271.76 00:33:52.564 clat (usec): min=179, max=41054, avg=883.82, stdev=5184.25 00:33:52.564 lat (usec): min=186, max=41077, avg=891.93, stdev=5185.89 00:33:52.564 clat percentiles (usec): 00:33:52.564 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 202], 00:33:52.564 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 208], 60.00th=[ 212], 00:33:52.564 | 70.00th=[ 215], 80.00th=[ 217], 90.00th=[ 231], 95.00th=[ 253], 00:33:52.564 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:52.564 | 99.99th=[41157] 00:33:52.564 write: IOPS=1010, BW=4043KiB/s (4140kB/s)(4096KiB/1013msec); 0 zone resets 00:33:52.564 slat (nsec): min=9903, max=41704, avg=11210.49, stdev=2030.45 00:33:52.564 clat (usec): min=125, max=364, avg=177.40, stdev=32.45 00:33:52.564 lat (usec): min=135, max=406, avg=188.61, stdev=32.88 00:33:52.564 clat percentiles (usec): 00:33:52.564 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 147], 00:33:52.564 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 165], 60.00th=[ 196], 00:33:52.564 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 219], 95.00th=[ 227], 00:33:52.564 | 99.00th=[ 243], 
99.50th=[ 251], 99.90th=[ 277], 99.95th=[ 367], 00:33:52.564 | 99.99th=[ 367] 00:33:52.564 bw ( KiB/s): min= 8192, max= 8192, per=51.50%, avg=8192.00, stdev= 0.00, samples=1 00:33:52.564 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:33:52.564 lat (usec) : 250=97.11%, 500=1.96%, 750=0.15% 00:33:52.564 lat (msec) : 50=0.77% 00:33:52.564 cpu : usr=1.68%, sys=2.87%, ctx=1936, majf=0, minf=1 00:33:52.564 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:52.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.564 issued rwts: total=912,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.564 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:52.564 job1: (groupid=0, jobs=1): err= 0: pid=1889745: Tue Dec 10 14:35:52 2024 00:33:52.564 read: IOPS=21, BW=85.4KiB/s (87.5kB/s)(88.0KiB/1030msec) 00:33:52.564 slat (nsec): min=10218, max=25497, avg=24231.82, stdev=3211.81 00:33:52.564 clat (usec): min=40529, max=41978, avg=41305.32, stdev=509.35 00:33:52.564 lat (usec): min=40539, max=42003, avg=41329.55, stdev=510.54 00:33:52.564 clat percentiles (usec): 00:33:52.564 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:33:52.564 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:52.564 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:52.564 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:52.564 | 99.99th=[42206] 00:33:52.565 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:33:52.565 slat (nsec): min=10783, max=41550, avg=12255.62, stdev=2018.39 00:33:52.565 clat (usec): min=132, max=392, avg=211.89, stdev=30.71 00:33:52.565 lat (usec): min=143, max=429, avg=224.15, stdev=31.06 00:33:52.565 clat percentiles (usec): 00:33:52.565 | 1.00th=[ 143], 5.00th=[ 161], 10.00th=[ 186], 20.00th=[ 196], 00:33:52.565 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 208], 60.00th=[ 215], 00:33:52.565 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 241], 95.00th=[ 277], 00:33:52.565 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 392], 99.95th=[ 392], 00:33:52.565 | 99.99th=[ 392] 00:33:52.565 bw ( KiB/s): min= 4096, max= 4096, per=25.75%, avg=4096.00, stdev= 0.00, samples=1 00:33:52.565 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:52.565 lat (usec) : 250=86.70%, 500=9.18% 00:33:52.565 lat (msec) : 50=4.12% 00:33:52.565 cpu : usr=0.39%, sys=0.97%, ctx=537, majf=0, minf=2 00:33:52.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:52.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.565 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:52.565 job2: (groupid=0, jobs=1): err= 0: pid=1889749: Tue Dec 10 14:35:52 2024 00:33:52.565 read: IOPS=1534, BW=6139KiB/s (6287kB/s)(6164KiB/1004msec) 00:33:52.565 slat (nsec): min=6861, max=25464, avg=7775.96, stdev=934.23 00:33:52.565 clat (usec): min=209, max=41062, avg=394.87, stdev=2313.22 00:33:52.565 lat (usec): min=217, max=41083, avg=402.65, stdev=2313.96 00:33:52.565 clat percentiles (usec): 00:33:52.565 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 245], 00:33:52.565 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 251], 
60.00th=[ 253], 00:33:52.565 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 297], 95.00th=[ 302], 00:33:52.565 | 99.00th=[ 314], 99.50th=[ 347], 99.90th=[41157], 99.95th=[41157], 00:33:52.565 | 99.99th=[41157] 00:33:52.565 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:33:52.565 slat (nsec): min=9430, max=71194, avg=11033.08, stdev=1950.82 00:33:52.565 clat (usec): min=126, max=1039, avg=171.76, stdev=37.83 00:33:52.565 lat (usec): min=139, max=1049, avg=182.79, stdev=37.99 00:33:52.565 clat percentiles (usec): 00:33:52.565 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:33:52.565 | 30.00th=[ 149], 40.00th=[ 155], 50.00th=[ 163], 60.00th=[ 169], 00:33:52.565 | 70.00th=[ 176], 80.00th=[ 186], 90.00th=[ 241], 95.00th=[ 243], 00:33:52.565 | 99.00th=[ 247], 99.50th=[ 249], 99.90th=[ 388], 99.95th=[ 408], 00:33:52.565 | 99.99th=[ 1037] 00:33:52.565 bw ( KiB/s): min= 8192, max= 8192, per=51.50%, avg=8192.00, stdev= 0.00, samples=2 00:33:52.565 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:33:52.565 lat (usec) : 250=77.79%, 500=22.01%, 750=0.03% 00:33:52.565 lat (msec) : 2=0.03%, 50=0.14% 00:33:52.565 cpu : usr=2.79%, sys=2.69%, ctx=3590, majf=0, minf=1 00:33:52.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:52.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.565 issued rwts: total=1541,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:52.565 job3: (groupid=0, jobs=1): err= 0: pid=1889750: Tue Dec 10 14:35:52 2024 00:33:52.565 read: IOPS=23, BW=93.8KiB/s (96.0kB/s)(96.0KiB/1024msec) 00:33:52.565 slat (nsec): min=9407, max=26091, avg=20367.25, stdev=4090.91 00:33:52.565 clat (usec): min=323, max=42455, avg=37946.73, stdev=11589.47 00:33:52.565 lat (usec): min=345, max=42465, avg=37967.10, stdev=11588.39 00:33:52.565 clat percentiles (usec): 00:33:52.565 | 1.00th=[ 326], 5.00th=[ 388], 10.00th=[40633], 20.00th=[41157], 00:33:52.565 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:52.565 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:52.565 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:52.565 | 99.99th=[42206] 00:33:52.565 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:33:52.565 slat (nsec): min=10802, max=48182, avg=12086.90, stdev=1920.00 00:33:52.565 clat (usec): min=132, max=368, avg=204.44, stdev=20.73 00:33:52.565 lat (usec): min=143, max=416, avg=216.52, stdev=21.35 00:33:52.565 clat percentiles (usec): 00:33:52.565 | 1.00th=[ 141], 5.00th=[ 163], 10.00th=[ 182], 20.00th=[ 196], 00:33:52.565 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 206], 60.00th=[ 210], 00:33:52.565 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 223], 95.00th=[ 231], 00:33:52.565 | 99.00th=[ 251], 99.50th=[ 269], 99.90th=[ 367], 99.95th=[ 367], 00:33:52.565 | 99.99th=[ 367] 00:33:52.565 bw ( KiB/s): min= 4096, max= 4096, per=25.75%, avg=4096.00, stdev= 0.00, samples=1 00:33:52.565 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:52.565 lat (usec) : 250=94.40%, 500=1.49% 00:33:52.565 lat (msec) : 50=4.10% 00:33:52.565 cpu : usr=0.20%, sys=0.68%, ctx=536, majf=0, minf=2 00:33:52.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:52.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.565 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:52.565 00:33:52.565 Run status group 0 (all jobs): 00:33:52.565 READ: bw=9705KiB/s (9938kB/s), 85.4KiB/s-6139KiB/s (87.5kB/s-6287kB/s), io=9996KiB (10.2MB), run=1004-1030msec 00:33:52.565 WRITE: bw=15.5MiB/s (16.3MB/s), 1988KiB/s-8159KiB/s (2036kB/s-8355kB/s), io=16.0MiB (16.8MB), run=1004-1030msec 00:33:52.565 00:33:52.565 Disk stats (read/write): 00:33:52.565 nvme0n1: ios=957/1024, merge=0/0, ticks=614/166, in_queue=780, util=86.17% 00:33:52.565 nvme0n2: ios=67/512, merge=0/0, ticks=1248/106, in_queue=1354, util=97.15% 00:33:52.565 nvme0n3: ios=1537/2048, merge=0/0, ticks=451/347, in_queue=798, util=88.67% 00:33:52.565 nvme0n4: ios=18/512, merge=0/0, ticks=707/101, in_queue=808, util=89.61% 00:33:52.565 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:33:52.565 [global] 00:33:52.565 thread=1 00:33:52.565 invalidate=1 00:33:52.565 rw=randwrite 00:33:52.565 time_based=1 00:33:52.565 runtime=1 00:33:52.565 ioengine=libaio 00:33:52.565 direct=1 00:33:52.565 bs=4096 00:33:52.565 iodepth=1 00:33:52.565 norandommap=0 00:33:52.565 numjobs=1 00:33:52.565 00:33:52.565 verify_dump=1 00:33:52.565 verify_backlog=512 00:33:52.565 verify_state_save=0 00:33:52.565 do_verify=1 00:33:52.565 verify=crc32c-intel 00:33:52.565 [job0] 00:33:52.565 filename=/dev/nvme0n1 00:33:52.565 [job1] 00:33:52.565 filename=/dev/nvme0n2 00:33:52.565 [job2] 00:33:52.565 filename=/dev/nvme0n3 00:33:52.565 [job3] 00:33:52.565 filename=/dev/nvme0n4 00:33:52.565 Could not set queue depth (nvme0n1) 00:33:52.565 Could not set queue depth (nvme0n2) 00:33:52.565 Could not set queue depth (nvme0n3) 00:33:52.565 Could not set queue depth (nvme0n4) 00:33:52.565 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:52.565 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:52.565 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:52.565 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:52.565 fio-3.35 00:33:52.565 Starting 4 threads 00:33:53.938 00:33:53.938 job0: (groupid=0, jobs=1): err= 0: pid=1890109: Tue Dec 10 14:35:54 2024 00:33:53.938 read: IOPS=1465, BW=5863KiB/s (6003kB/s)(5892KiB/1005msec) 00:33:53.938 slat (nsec): min=7114, max=43782, avg=8570.39, stdev=1900.18 00:33:53.938 clat (usec): min=199, max=42044, avg=474.75, stdev=3085.25 00:33:53.938 lat (usec): min=219, max=42065, avg=483.32, stdev=3086.16 00:33:53.938 clat percentiles (usec): 00:33:53.938 | 1.00th=[ 217], 5.00th=[ 221], 10.00th=[ 223], 20.00th=[ 225], 00:33:53.938 | 30.00th=[ 229], 40.00th=[ 231], 50.00th=[ 233], 60.00th=[ 235], 00:33:53.938 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 249], 95.00th=[ 260], 00:33:53.938 | 99.00th=[ 314], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206], 00:33:53.938 | 99.99th=[42206] 00:33:53.938 write: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec); 0 zone resets 00:33:53.938 slat (nsec): min=9689, max=38109, avg=10937.38, stdev=1643.33 00:33:53.938 clat 
(usec): min=152, max=312, avg=173.91, stdev=10.62 00:33:53.938 lat (usec): min=162, max=343, avg=184.84, stdev=11.17 00:33:53.938 clat percentiles (usec): 00:33:53.938 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:33:53.938 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 176], 00:33:53.938 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 186], 95.00th=[ 192], 00:33:53.938 | 99.00th=[ 208], 99.50th=[ 212], 99.90th=[ 289], 99.95th=[ 314], 00:33:53.938 | 99.99th=[ 314] 00:33:53.938 bw ( KiB/s): min= 1720, max=10568, per=21.71%, avg=6144.00, stdev=6256.48, samples=2 00:33:53.938 iops : min= 430, max= 2642, avg=1536.00, stdev=1564.12, samples=2 00:33:53.938 lat (usec) : 250=95.38%, 500=4.32% 00:33:53.938 lat (msec) : 50=0.30% 00:33:53.938 cpu : usr=2.39%, sys=4.78%, ctx=3009, majf=0, minf=2 00:33:53.938 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:53.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.938 issued rwts: total=1473,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.938 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:53.938 job1: (groupid=0, jobs=1): err= 0: pid=1890110: Tue Dec 10 14:35:54 2024 00:33:53.938 read: IOPS=2188, BW=8755KiB/s (8965kB/s)(8764KiB/1001msec) 00:33:53.938 slat (usec): min=2, max=179, avg= 8.44, stdev= 4.97 00:33:53.938 clat (usec): min=178, max=2094, avg=236.40, stdev=80.25 00:33:53.938 lat (usec): min=181, max=2105, avg=244.84, stdev=79.71 00:33:53.938 clat percentiles (usec): 00:33:53.938 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 212], 00:33:53.938 | 30.00th=[ 217], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 229], 00:33:53.938 | 70.00th=[ 235], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 00:33:53.938 | 99.00th=[ 392], 99.50th=[ 404], 99.90th=[ 1975], 99.95th=[ 2040], 00:33:53.938 | 99.99th=[ 2089] 00:33:53.938 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:33:53.938 slat (usec): min=3, max=1967, avg=12.18, stdev=38.90 00:33:53.938 clat (usec): min=111, max=361, avg=163.75, stdev=16.24 00:33:53.938 lat (usec): min=131, max=2136, avg=175.93, stdev=43.01 00:33:53.938 clat percentiles (usec): 00:33:53.938 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:33:53.938 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:33:53.938 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 194], 00:33:53.938 | 99.00th=[ 212], 99.50th=[ 221], 99.90th=[ 243], 99.95th=[ 269], 00:33:53.938 | 99.99th=[ 363] 00:33:53.938 bw ( KiB/s): min=10944, max=10944, per=38.67%, avg=10944.00, stdev= 0.00, samples=1 00:33:53.938 iops : min= 2736, max= 2736, avg=2736.00, stdev= 0.00, samples=1 00:33:53.938 lat (usec) : 250=89.14%, 500=10.78% 00:33:53.938 lat (msec) : 2=0.04%, 4=0.04% 00:33:53.938 cpu : usr=4.30%, sys=6.60%, ctx=4753, majf=0, minf=1 00:33:53.938 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:53.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.938 issued rwts: total=2191,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.938 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:53.938 job2: (groupid=0, jobs=1): err= 0: pid=1890112: Tue Dec 10 14:35:54 2024 00:33:53.938 read: IOPS=21, BW=86.9KiB/s (89.0kB/s)(88.0KiB/1013msec) 
00:33:53.938 slat (nsec): min=9932, max=29454, avg=23475.82, stdev=3454.23 00:33:53.938 clat (usec): min=40765, max=45042, avg=41188.87, stdev=888.62 00:33:53.938 lat (usec): min=40775, max=45071, avg=41212.34, stdev=890.00 00:33:53.938 clat percentiles (usec): 00:33:53.938 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:33:53.938 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:53.938 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:33:53.938 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:33:53.938 | 99.99th=[44827] 00:33:53.938 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:33:53.938 slat (nsec): min=10765, max=47536, avg=12872.79, stdev=2598.93 00:33:53.938 clat (usec): min=148, max=336, avg=191.10, stdev=19.81 00:33:53.938 lat (usec): min=160, max=371, avg=203.97, stdev=20.62 00:33:53.938 clat percentiles (usec): 00:33:53.938 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 176], 00:33:53.938 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 194], 00:33:53.938 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 215], 95.00th=[ 223], 00:33:53.939 | 99.00th=[ 253], 99.50th=[ 277], 99.90th=[ 338], 99.95th=[ 338], 00:33:53.939 | 99.99th=[ 338] 00:33:53.939 bw ( KiB/s): min= 4096, max= 4096, per=14.47%, avg=4096.00, stdev= 0.00, samples=1 00:33:53.939 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:53.939 lat (usec) : 250=94.57%, 500=1.31% 00:33:53.939 lat (msec) : 50=4.12% 00:33:53.939 cpu : usr=0.99%, sys=0.49%, ctx=536, majf=0, minf=1 00:33:53.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:53.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.939 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:53.939 job3: (groupid=0, jobs=1): err= 0: pid=1890117: Tue Dec 10 14:35:54 2024 00:33:53.939 read: IOPS=2128, BW=8515KiB/s (8720kB/s)(8524KiB/1001msec) 00:33:53.939 slat (nsec): min=6969, max=27338, avg=7907.27, stdev=914.18 00:33:53.939 clat (usec): min=202, max=1828, avg=233.77, stdev=39.67 00:33:53.939 lat (usec): min=210, max=1837, avg=241.68, stdev=39.78 00:33:53.939 clat percentiles (usec): 00:33:53.939 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 221], 00:33:53.939 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 235], 00:33:53.939 | 70.00th=[ 241], 80.00th=[ 245], 90.00th=[ 251], 95.00th=[ 258], 00:33:53.939 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 449], 99.95th=[ 635], 00:33:53.939 | 99.99th=[ 1827] 00:33:53.939 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:33:53.939 slat (nsec): min=10155, max=44684, avg=11401.54, stdev=1322.15 00:33:53.939 clat (usec): min=136, max=370, avg=173.53, stdev=22.37 00:33:53.939 lat (usec): min=146, max=382, avg=184.93, stdev=22.54 00:33:53.939 clat percentiles (usec): 00:33:53.939 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 159], 00:33:53.939 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:33:53.939 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 198], 95.00th=[ 215], 00:33:53.939 | 99.00th=[ 265], 99.50th=[ 285], 99.90th=[ 351], 99.95th=[ 367], 00:33:53.939 | 99.99th=[ 371] 00:33:53.939 bw ( KiB/s): min=10480, max=10480, per=37.03%, avg=10480.00, stdev= 0.00, 
samples=1 00:33:53.939 iops : min= 2620, max= 2620, avg=2620.00, stdev= 0.00, samples=1 00:33:53.939 lat (usec) : 250=94.48%, 500=5.48%, 750=0.02% 00:33:53.939 lat (msec) : 2=0.02% 00:33:53.939 cpu : usr=2.10%, sys=5.20%, ctx=4692, majf=0, minf=1 00:33:53.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:53.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.939 issued rwts: total=2131,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:53.939 00:33:53.939 Run status group 0 (all jobs): 00:33:53.939 READ: bw=22.4MiB/s (23.5MB/s), 86.9KiB/s-8755KiB/s (89.0kB/s-8965kB/s), io=22.7MiB (23.8MB), run=1001-1013msec 00:33:53.939 WRITE: bw=27.6MiB/s (29.0MB/s), 2022KiB/s-9.99MiB/s (2070kB/s-10.5MB/s), io=28.0MiB (29.4MB), run=1001-1013msec 00:33:53.939 00:33:53.939 Disk stats (read/write): 00:33:53.939 nvme0n1: ios=1518/1536, merge=0/0, ticks=509/249, in_queue=758, util=85.77% 00:33:53.939 nvme0n2: ios=1948/2048, merge=0/0, ticks=777/312, in_queue=1089, util=100.00% 00:33:53.939 nvme0n3: ios=75/512, merge=0/0, ticks=1624/90, in_queue=1714, util=97.38% 00:33:53.939 nvme0n4: ios=1917/2048, merge=0/0, ticks=1327/346, in_queue=1673, util=99.05% 00:33:53.939 14:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:33:53.939 [global] 00:33:53.939 thread=1 00:33:53.939 invalidate=1 00:33:53.939 rw=write 00:33:53.939 time_based=1 00:33:53.939 runtime=1 00:33:53.939 ioengine=libaio 00:33:53.939 direct=1 00:33:53.939 bs=4096 00:33:53.939 iodepth=128 00:33:53.939 norandommap=0 00:33:53.939 numjobs=1 00:33:53.939 00:33:53.939 verify_dump=1 00:33:53.939 verify_backlog=512 00:33:53.939 verify_state_save=0 00:33:53.939 do_verify=1 00:33:53.939 verify=crc32c-intel 00:33:53.939 [job0] 00:33:53.939 filename=/dev/nvme0n1 00:33:53.939 [job1] 00:33:53.939 filename=/dev/nvme0n2 00:33:53.939 [job2] 00:33:53.939 filename=/dev/nvme0n3 00:33:53.939 [job3] 00:33:53.939 filename=/dev/nvme0n4 00:33:53.939 Could not set queue depth (nvme0n1) 00:33:53.939 Could not set queue depth (nvme0n2) 00:33:53.939 Could not set queue depth (nvme0n3) 00:33:53.939 Could not set queue depth (nvme0n4) 00:33:54.196 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:54.196 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:54.196 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:54.196 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:54.196 fio-3.35 00:33:54.196 Starting 4 threads 00:33:55.567 00:33:55.567 job0: (groupid=0, jobs=1): err= 0: pid=1890485: Tue Dec 10 14:35:56 2024 00:33:55.567 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:33:55.567 slat (nsec): min=1538, max=14022k, avg=88671.26, stdev=715565.60 00:33:55.567 clat (usec): min=1899, max=53411, avg=12577.12, stdev=8462.98 00:33:55.567 lat (usec): min=1932, max=53421, avg=12665.79, stdev=8525.36 00:33:55.567 clat percentiles (usec): 00:33:55.567 | 1.00th=[ 4178], 5.00th=[ 5276], 10.00th=[ 7308], 20.00th=[ 8291], 00:33:55.567 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 
9372], 60.00th=[10028], 00:33:55.567 | 70.00th=[11863], 80.00th=[14091], 90.00th=[21627], 95.00th=[34341], 00:33:55.567 | 99.00th=[44827], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:33:55.567 | 99.99th=[53216] 00:33:55.567 write: IOPS=5557, BW=21.7MiB/s (22.8MB/s)(21.8MiB/1004msec); 0 zone resets 00:33:55.567 slat (usec): min=2, max=9650, avg=86.81, stdev=552.67 00:33:55.567 clat (usec): min=1718, max=40965, avg=11196.51, stdev=5597.56 00:33:55.567 lat (usec): min=2488, max=40971, avg=11283.32, stdev=5650.61 00:33:55.567 clat percentiles (usec): 00:33:55.567 | 1.00th=[ 4948], 5.00th=[ 5997], 10.00th=[ 6456], 20.00th=[ 8717], 00:33:55.567 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:33:55.567 | 70.00th=[10028], 80.00th=[12649], 90.00th=[20579], 95.00th=[22152], 00:33:55.567 | 99.00th=[36439], 99.50th=[38011], 99.90th=[41157], 99.95th=[41157], 00:33:55.567 | 99.99th=[41157] 00:33:55.567 bw ( KiB/s): min=16488, max=27136, per=29.48%, avg=21812.00, stdev=7529.27, samples=2 00:33:55.567 iops : min= 4122, max= 6784, avg=5453.00, stdev=1882.32, samples=2 00:33:55.567 lat (msec) : 2=0.12%, 4=0.66%, 10=62.84%, 20=25.32%, 50=10.74% 00:33:55.567 lat (msec) : 100=0.32% 00:33:55.567 cpu : usr=4.29%, sys=7.08%, ctx=429, majf=0, minf=1 00:33:55.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:33:55.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:55.567 issued rwts: total=5120,5580,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.567 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:55.567 job1: (groupid=0, jobs=1): err= 0: pid=1890486: Tue Dec 10 14:35:56 2024 00:33:55.567 read: IOPS=6466, BW=25.3MiB/s (26.5MB/s)(25.4MiB/1004msec) 00:33:55.567 slat (nsec): min=1359, max=8968.1k, avg=76539.14, stdev=568885.14 00:33:55.567 clat (usec): min=2245, max=18302, avg=9939.68, stdev=2105.09 00:33:55.567 lat (usec): min=4227, max=22632, avg=10016.22, stdev=2150.60 00:33:55.567 clat percentiles (usec): 00:33:55.567 | 1.00th=[ 6128], 5.00th=[ 7570], 10.00th=[ 8225], 20.00th=[ 8586], 00:33:55.567 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9634], 00:33:55.567 | 70.00th=[10290], 80.00th=[10945], 90.00th=[13042], 95.00th=[14746], 00:33:55.567 | 99.00th=[16581], 99.50th=[17171], 99.90th=[17695], 99.95th=[18220], 00:33:55.567 | 99.99th=[18220] 00:33:55.567 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:33:55.567 slat (usec): min=2, max=8155, avg=69.17, stdev=439.55 00:33:55.567 clat (usec): min=2672, max=18304, avg=9379.13, stdev=1927.20 00:33:55.567 lat (usec): min=2677, max=18308, avg=9448.31, stdev=1945.51 00:33:55.567 clat percentiles (usec): 00:33:55.567 | 1.00th=[ 5211], 5.00th=[ 5997], 10.00th=[ 6128], 20.00th=[ 8356], 00:33:55.567 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9765], 00:33:55.567 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[12125], 95.00th=[12780], 00:33:55.567 | 99.00th=[14091], 99.50th=[14222], 99.90th=[16581], 99.95th=[17695], 00:33:55.567 | 99.99th=[18220] 00:33:55.567 bw ( KiB/s): min=25688, max=27560, per=35.99%, avg=26624.00, stdev=1323.70, samples=2 00:33:55.567 iops : min= 6422, max= 6890, avg=6656.00, stdev=330.93, samples=2 00:33:55.567 lat (msec) : 4=0.18%, 10=69.80%, 20=30.02% 00:33:55.567 cpu : usr=5.58%, sys=7.38%, ctx=582, majf=0, minf=1 00:33:55.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, 
>=64=99.5% 00:33:55.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:55.567 issued rwts: total=6492,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.567 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:55.567 job2: (groupid=0, jobs=1): err= 0: pid=1890487: Tue Dec 10 14:35:56 2024 00:33:55.567 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:33:55.567 slat (nsec): min=1784, max=15938k, avg=117483.68, stdev=834494.86 00:33:55.567 clat (usec): min=3764, max=49038, avg=15293.88, stdev=8737.11 00:33:55.567 lat (usec): min=3772, max=49043, avg=15411.36, stdev=8797.78 00:33:55.567 clat percentiles (usec): 00:33:55.567 | 1.00th=[ 4113], 5.00th=[ 4883], 10.00th=[ 7177], 20.00th=[ 9503], 00:33:55.567 | 30.00th=[11076], 40.00th=[12780], 50.00th=[13829], 60.00th=[14222], 00:33:55.567 | 70.00th=[14484], 80.00th=[17957], 90.00th=[29230], 95.00th=[33817], 00:33:55.567 | 99.00th=[43779], 99.50th=[44827], 99.90th=[45876], 99.95th=[49021], 00:33:55.567 | 99.99th=[49021] 00:33:55.567 write: IOPS=4006, BW=15.7MiB/s (16.4MB/s)(15.8MiB/1008msec); 0 zone resets 00:33:55.567 slat (usec): min=2, max=20812, avg=137.61, stdev=1065.49 00:33:55.567 clat (usec): min=1509, max=58603, avg=18077.70, stdev=9172.57 00:33:55.567 lat (usec): min=1523, max=58635, avg=18215.31, stdev=9270.94 00:33:55.567 clat percentiles (usec): 00:33:55.567 | 1.00th=[ 7504], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[10683], 00:33:55.567 | 30.00th=[11207], 40.00th=[13173], 50.00th=[13435], 60.00th=[16188], 00:33:55.567 | 70.00th=[21365], 80.00th=[26870], 90.00th=[33162], 95.00th=[38011], 00:33:55.567 | 99.00th=[42730], 99.50th=[42730], 99.90th=[44827], 99.95th=[47973], 00:33:55.567 | 99.99th=[58459] 00:33:55.567 bw ( KiB/s): min=13160, max=18128, per=21.14%, avg=15644.00, stdev=3512.91, samples=2 00:33:55.567 iops : min= 3290, max= 4532, avg=3911.00, stdev=878.23, samples=2 00:33:55.567 lat (msec) : 2=0.03%, 4=0.47%, 10=13.92%, 20=57.93%, 50=27.63% 00:33:55.567 lat (msec) : 100=0.03% 00:33:55.567 cpu : usr=3.28%, sys=5.76%, ctx=245, majf=0, minf=2 00:33:55.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:55.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:55.567 issued rwts: total=3584,4039,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.567 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:55.567 job3: (groupid=0, jobs=1): err= 0: pid=1890488: Tue Dec 10 14:35:56 2024 00:33:55.567 read: IOPS=2682, BW=10.5MiB/s (11.0MB/s)(11.0MiB/1046msec) 00:33:55.567 slat (nsec): min=1726, max=14035k, avg=144748.11, stdev=944539.60 00:33:55.567 clat (usec): min=9327, max=63666, avg=19855.75, stdev=10440.68 00:33:55.567 lat (usec): min=9337, max=63674, avg=20000.50, stdev=10501.74 00:33:55.567 clat percentiles (usec): 00:33:55.567 | 1.00th=[ 9372], 5.00th=[10683], 10.00th=[11994], 20.00th=[13042], 00:33:55.567 | 30.00th=[13960], 40.00th=[15926], 50.00th=[16712], 60.00th=[17695], 00:33:55.567 | 70.00th=[18482], 80.00th=[23725], 90.00th=[31327], 95.00th=[52167], 00:33:55.567 | 99.00th=[53216], 99.50th=[57934], 99.90th=[63701], 99.95th=[63701], 00:33:55.567 | 99.99th=[63701] 00:33:55.567 write: IOPS=2936, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1046msec); 0 zone resets 00:33:55.567 slat (usec): min=2, max=24956, avg=182.87, stdev=1076.12 00:33:55.567 clat 
(usec): min=5320, max=64092, avg=24964.88, stdev=14304.40 00:33:55.567 lat (usec): min=5346, max=64109, avg=25147.76, stdev=14405.64 00:33:55.567 clat percentiles (usec): 00:33:55.567 | 1.00th=[ 8291], 5.00th=[10945], 10.00th=[11863], 20.00th=[13173], 00:33:55.567 | 30.00th=[17433], 40.00th=[20055], 50.00th=[21103], 60.00th=[21627], 00:33:55.567 | 70.00th=[22414], 80.00th=[30016], 90.00th=[53740], 95.00th=[55313], 00:33:55.567 | 99.00th=[63177], 99.50th=[63701], 99.90th=[64226], 99.95th=[64226], 00:33:55.567 | 99.99th=[64226] 00:33:55.567 bw ( KiB/s): min=11216, max=13360, per=16.61%, avg=12288.00, stdev=1516.04, samples=2 00:33:55.567 iops : min= 2804, max= 3340, avg=3072.00, stdev=379.01, samples=2 00:33:55.567 lat (msec) : 10=2.94%, 20=53.20%, 50=34.18%, 100=9.68% 00:33:55.567 cpu : usr=2.01%, sys=5.84%, ctx=277, majf=0, minf=1 00:33:55.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:33:55.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:55.567 issued rwts: total=2806,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.567 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:55.567 00:33:55.567 Run status group 0 (all jobs): 00:33:55.567 READ: bw=67.2MiB/s (70.5MB/s), 10.5MiB/s-25.3MiB/s (11.0MB/s-26.5MB/s), io=70.3MiB (73.7MB), run=1004-1046msec 00:33:55.567 WRITE: bw=72.2MiB/s (75.8MB/s), 11.5MiB/s-25.9MiB/s (12.0MB/s-27.2MB/s), io=75.6MiB (79.2MB), run=1004-1046msec 00:33:55.567 00:33:55.567 Disk stats (read/write): 00:33:55.567 nvme0n1: ios=4149/4469, merge=0/0, ticks=41875/40155, in_queue=82030, util=95.89% 00:33:55.567 nvme0n2: ios=5509/5632, merge=0/0, ticks=43677/39124, in_queue=82801, util=97.15% 00:33:55.567 nvme0n3: ios=3072/3518, merge=0/0, ticks=26050/32224, in_queue=58274, util=88.60% 00:33:55.567 nvme0n4: ios=2448/2560, merge=0/0, ticks=43465/60408, in_queue=103873, util=97.36% 00:33:55.567 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:33:55.567 [global] 00:33:55.567 thread=1 00:33:55.567 invalidate=1 00:33:55.567 rw=randwrite 00:33:55.567 time_based=1 00:33:55.567 runtime=1 00:33:55.567 ioengine=libaio 00:33:55.567 direct=1 00:33:55.567 bs=4096 00:33:55.567 iodepth=128 00:33:55.567 norandommap=0 00:33:55.567 numjobs=1 00:33:55.567 00:33:55.567 verify_dump=1 00:33:55.567 verify_backlog=512 00:33:55.567 verify_state_save=0 00:33:55.567 do_verify=1 00:33:55.567 verify=crc32c-intel 00:33:55.567 [job0] 00:33:55.567 filename=/dev/nvme0n1 00:33:55.567 [job1] 00:33:55.567 filename=/dev/nvme0n2 00:33:55.567 [job2] 00:33:55.567 filename=/dev/nvme0n3 00:33:55.567 [job3] 00:33:55.567 filename=/dev/nvme0n4 00:33:55.567 Could not set queue depth (nvme0n1) 00:33:55.567 Could not set queue depth (nvme0n2) 00:33:55.567 Could not set queue depth (nvme0n3) 00:33:55.568 Could not set queue depth (nvme0n4) 00:33:55.826 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:55.826 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:55.826 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:55.826 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:33:55.826 fio-3.35 00:33:55.826 Starting 4 threads 00:33:57.199 00:33:57.199 job0: (groupid=0, jobs=1): err= 0: pid=1890853: Tue Dec 10 14:35:57 2024 00:33:57.199 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:33:57.199 slat (nsec): min=1127, max=45756k, avg=140521.13, stdev=1251618.53 00:33:57.199 clat (usec): min=3827, max=97801, avg=18339.18, stdev=16085.28 00:33:57.199 lat (usec): min=3831, max=97806, avg=18479.71, stdev=16190.46 00:33:57.199 clat percentiles (usec): 00:33:57.199 | 1.00th=[ 4752], 5.00th=[ 6718], 10.00th=[ 7701], 20.00th=[ 8586], 00:33:57.199 | 30.00th=[ 9241], 40.00th=[10159], 50.00th=[12256], 60.00th=[13566], 00:33:57.199 | 70.00th=[16319], 80.00th=[28443], 90.00th=[37487], 95.00th=[53740], 00:33:57.199 | 99.00th=[78119], 99.50th=[98042], 99.90th=[98042], 99.95th=[98042], 00:33:57.199 | 99.99th=[98042] 00:33:57.199 write: IOPS=4143, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1002msec); 0 zone resets 00:33:57.199 slat (usec): min=2, max=20942, avg=89.41, stdev=639.15 00:33:57.199 clat (usec): min=1564, max=72051, avg=12531.53, stdev=8632.16 00:33:57.199 lat (usec): min=3888, max=72058, avg=12620.94, stdev=8675.59 00:33:57.199 clat percentiles (usec): 00:33:57.199 | 1.00th=[ 4555], 5.00th=[ 5800], 10.00th=[ 6652], 20.00th=[ 8094], 00:33:57.199 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9896], 00:33:57.199 | 70.00th=[13173], 80.00th=[14746], 90.00th=[21103], 95.00th=[22152], 00:33:57.199 | 99.00th=[57410], 99.50th=[58983], 99.90th=[71828], 99.95th=[71828], 00:33:57.199 | 99.99th=[71828] 00:33:57.199 bw ( KiB/s): min=12288, max=20480, per=24.66%, avg=16384.00, stdev=5792.62, samples=2 00:33:57.199 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:33:57.199 lat (msec) : 2=0.01%, 4=0.25%, 10=49.89%, 20=29.95%, 50=15.39% 00:33:57.199 lat (msec) : 100=4.51% 00:33:57.199 cpu : usr=2.90%, sys=3.70%, ctx=371, majf=0, minf=1 00:33:57.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:57.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:57.199 issued rwts: total=4096,4152,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.199 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:57.199 job1: (groupid=0, jobs=1): err= 0: pid=1890854: Tue Dec 10 14:35:57 2024 00:33:57.199 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:33:57.199 slat (nsec): min=1614, max=24246k, avg=102365.83, stdev=749784.22 00:33:57.199 clat (usec): min=2806, max=67705, avg=12807.22, stdev=7666.62 00:33:57.199 lat (usec): min=2815, max=67721, avg=12909.59, stdev=7743.82 00:33:57.199 clat percentiles (usec): 00:33:57.199 | 1.00th=[ 5080], 5.00th=[ 7046], 10.00th=[ 7898], 20.00th=[ 8586], 00:33:57.199 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10290], 60.00th=[10945], 00:33:57.199 | 70.00th=[12256], 80.00th=[14353], 90.00th=[22152], 95.00th=[30540], 00:33:57.199 | 99.00th=[46400], 99.50th=[51643], 99.90th=[58983], 99.95th=[58983], 00:33:57.199 | 99.99th=[67634] 00:33:57.199 write: IOPS=5141, BW=20.1MiB/s (21.1MB/s)(20.1MiB/1002msec); 0 zone resets 00:33:57.199 slat (usec): min=2, max=12921, avg=85.21, stdev=621.68 00:33:57.199 clat (usec): min=553, max=67664, avg=11852.20, stdev=7688.21 00:33:57.199 lat (usec): min=2692, max=67700, avg=11937.41, stdev=7722.30 00:33:57.199 clat percentiles (usec): 00:33:57.199 | 1.00th=[ 4948], 5.00th=[ 7635], 10.00th=[ 8225], 20.00th=[ 
9110], 00:33:57.199 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[ 9896], 00:33:57.199 | 70.00th=[10421], 80.00th=[12125], 90.00th=[16581], 95.00th=[20317], 00:33:57.199 | 99.00th=[52691], 99.50th=[61080], 99.90th=[61080], 99.95th=[61080], 00:33:57.199 | 99.99th=[67634] 00:33:57.199 bw ( KiB/s): min=17432, max=23528, per=30.82%, avg=20480.00, stdev=4310.52, samples=2 00:33:57.199 iops : min= 4358, max= 5882, avg=5120.00, stdev=1077.63, samples=2 00:33:57.199 lat (usec) : 750=0.01% 00:33:57.199 lat (msec) : 4=0.42%, 10=53.99%, 20=37.29%, 50=7.23%, 100=1.06% 00:33:57.199 cpu : usr=5.19%, sys=5.19%, ctx=330, majf=0, minf=1 00:33:57.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:33:57.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:57.199 issued rwts: total=5120,5152,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.199 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:57.199 job2: (groupid=0, jobs=1): err= 0: pid=1890855: Tue Dec 10 14:35:57 2024 00:33:57.199 read: IOPS=3619, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1007msec) 00:33:57.199 slat (nsec): min=1357, max=23178k, avg=111603.90, stdev=803688.80 00:33:57.199 clat (usec): min=3679, max=42191, avg=14105.91, stdev=5334.06 00:33:57.199 lat (usec): min=3702, max=49587, avg=14217.52, stdev=5403.02 00:33:57.199 clat percentiles (usec): 00:33:57.199 | 1.00th=[ 5735], 5.00th=[ 8094], 10.00th=[ 8717], 20.00th=[ 9765], 00:33:57.199 | 30.00th=[11469], 40.00th=[12256], 50.00th=[13304], 60.00th=[13829], 00:33:57.199 | 70.00th=[15401], 80.00th=[16712], 90.00th=[20055], 95.00th=[25560], 00:33:57.199 | 99.00th=[31327], 99.50th=[33424], 99.90th=[39584], 99.95th=[40109], 00:33:57.199 | 99.99th=[42206] 00:33:57.199 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:33:57.199 slat (usec): min=2, max=12480, avg=118.92, stdev=687.71 00:33:57.199 clat (usec): min=422, max=96804, avg=18652.97, stdev=13249.55 00:33:57.199 lat (usec): min=441, max=96818, avg=18771.88, stdev=13309.78 00:33:57.199 clat percentiles (usec): 00:33:57.199 | 1.00th=[ 1012], 5.00th=[ 4047], 10.00th=[ 7373], 20.00th=[ 8586], 00:33:57.199 | 30.00th=[10552], 40.00th=[11207], 50.00th=[15008], 60.00th=[19792], 00:33:57.199 | 70.00th=[21365], 80.00th=[28181], 90.00th=[35914], 95.00th=[41681], 00:33:57.199 | 99.00th=[77071], 99.50th=[85459], 99.90th=[93848], 99.95th=[93848], 00:33:57.199 | 99.99th=[96994] 00:33:57.199 bw ( KiB/s): min=15104, max=17136, per=24.26%, avg=16120.00, stdev=1436.84, samples=2 00:33:57.199 iops : min= 3776, max= 4284, avg=4030.00, stdev=359.21, samples=2 00:33:57.199 lat (usec) : 500=0.04%, 1000=0.44% 00:33:57.199 lat (msec) : 2=0.89%, 4=1.21%, 10=22.85%, 20=49.49%, 50=24.13% 00:33:57.199 lat (msec) : 100=0.94% 00:33:57.199 cpu : usr=3.58%, sys=4.17%, ctx=368, majf=0, minf=2 00:33:57.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:57.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:57.199 issued rwts: total=3645,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.199 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:57.199 job3: (groupid=0, jobs=1): err= 0: pid=1890856: Tue Dec 10 14:35:57 2024 00:33:57.200 read: IOPS=4116, BW=16.1MiB/s (16.9MB/s)(17.4MiB/1084msec) 00:33:57.200 slat (nsec): min=1368, 
max=43360k, avg=114332.90, stdev=918662.35 00:33:57.200 clat (usec): min=5760, max=93889, avg=15058.85, stdev=14222.66 00:33:57.200 lat (usec): min=5765, max=93891, avg=15173.18, stdev=14271.28 00:33:57.200 clat percentiles (usec): 00:33:57.200 | 1.00th=[ 5997], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[10290], 00:33:57.200 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:33:57.200 | 70.00th=[11863], 80.00th=[13042], 90.00th=[18220], 95.00th=[39584], 00:33:57.200 | 99.00th=[90702], 99.50th=[93848], 99.90th=[93848], 99.95th=[93848], 00:33:57.200 | 99.99th=[93848] 00:33:57.200 write: IOPS=4250, BW=16.6MiB/s (17.4MB/s)(18.0MiB/1084msec); 0 zone resets 00:33:57.200 slat (usec): min=2, max=10729, avg=108.17, stdev=582.20 00:33:57.200 clat (usec): min=5233, max=93935, avg=15266.39, stdev=13059.16 00:33:57.200 lat (usec): min=5236, max=93965, avg=15374.56, stdev=13119.09 00:33:57.200 clat percentiles (usec): 00:33:57.200 | 1.00th=[ 5932], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9765], 00:33:57.200 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:33:57.200 | 70.00th=[12125], 80.00th=[14353], 90.00th=[22414], 95.00th=[42206], 00:33:57.200 | 99.00th=[81265], 99.50th=[86508], 99.90th=[93848], 99.95th=[93848], 00:33:57.200 | 99.99th=[93848] 00:33:57.200 bw ( KiB/s): min=14728, max=22136, per=27.74%, avg=18432.00, stdev=5238.25, samples=2 00:33:57.200 iops : min= 3682, max= 5534, avg=4608.00, stdev=1309.56, samples=2 00:33:57.200 lat (msec) : 10=18.53%, 20=70.40%, 50=7.40%, 100=3.67% 00:33:57.200 cpu : usr=2.22%, sys=3.79%, ctx=526, majf=0, minf=1 00:33:57.200 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:33:57.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:57.200 issued rwts: total=4462,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.200 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:57.200 00:33:57.200 Run status group 0 (all jobs): 00:33:57.200 READ: bw=62.4MiB/s (65.5MB/s), 14.1MiB/s-20.0MiB/s (14.8MB/s-20.9MB/s), io=67.7MiB (71.0MB), run=1002-1084msec 00:33:57.200 WRITE: bw=64.9MiB/s (68.0MB/s), 15.9MiB/s-20.1MiB/s (16.7MB/s-21.1MB/s), io=70.3MiB (73.8MB), run=1002-1084msec 00:33:57.200 00:33:57.200 Disk stats (read/write): 00:33:57.200 nvme0n1: ios=3398/3584, merge=0/0, ticks=33657/31524, in_queue=65181, util=86.47% 00:33:57.200 nvme0n2: ios=4124/4107, merge=0/0, ticks=26987/21655, in_queue=48642, util=96.95% 00:33:57.200 nvme0n3: ios=3387/3584, merge=0/0, ticks=44505/57926, in_queue=102431, util=94.04% 00:33:57.200 nvme0n4: ios=3641/3744, merge=0/0, ticks=15470/23586, in_queue=39056, util=94.61% 00:33:57.200 14:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:33:57.200 14:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1891084 00:33:57.200 14:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:33:57.200 14:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:33:57.200 [global] 00:33:57.200 thread=1 00:33:57.200 invalidate=1 00:33:57.200 rw=read 00:33:57.200 time_based=1 00:33:57.200 runtime=10 00:33:57.200 ioengine=libaio 00:33:57.200 direct=1 00:33:57.200 bs=4096 00:33:57.200 iodepth=1 
00:33:57.200 norandommap=1 00:33:57.200 numjobs=1 00:33:57.200 00:33:57.200 [job0] 00:33:57.200 filename=/dev/nvme0n1 00:33:57.200 [job1] 00:33:57.200 filename=/dev/nvme0n2 00:33:57.200 [job2] 00:33:57.200 filename=/dev/nvme0n3 00:33:57.200 [job3] 00:33:57.200 filename=/dev/nvme0n4 00:33:57.200 Could not set queue depth (nvme0n1) 00:33:57.200 Could not set queue depth (nvme0n2) 00:33:57.200 Could not set queue depth (nvme0n3) 00:33:57.200 Could not set queue depth (nvme0n4) 00:33:57.458 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:57.458 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:57.458 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:57.458 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:57.458 fio-3.35 00:33:57.458 Starting 4 threads 00:34:00.738 14:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:00.738 14:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:00.738 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:34:00.738 fio: pid=1891227, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:00.738 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=2220032, buflen=4096 00:34:00.738 fio: pid=1891226, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:00.738 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:00.738 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:00.738 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:00.738 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:00.738 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1650688, buflen=4096 00:34:00.738 fio: pid=1891224, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:00.997 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=19173376, buflen=4096 00:34:00.997 fio: pid=1891225, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:00.997 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:00.997 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:00.997 00:34:00.997 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1891224: Tue Dec 10 14:36:01 2024 00:34:00.997 read: IOPS=129, 
BW=518KiB/s (531kB/s)(1612KiB/3111msec) 00:34:00.997 slat (usec): min=6, max=19345, avg=97.09, stdev=1235.55 00:34:00.997 clat (usec): min=197, max=41950, avg=7566.63, stdev=15647.16 00:34:00.997 lat (usec): min=206, max=41972, avg=7663.80, stdev=15659.58 00:34:00.997 clat percentiles (usec): 00:34:00.997 | 1.00th=[ 208], 5.00th=[ 223], 10.00th=[ 233], 20.00th=[ 241], 00:34:00.997 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 249], 60.00th=[ 251], 00:34:00.997 | 70.00th=[ 255], 80.00th=[ 404], 90.00th=[41157], 95.00th=[41157], 00:34:00.997 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:34:00.997 | 99.99th=[42206] 00:34:00.997 bw ( KiB/s): min= 96, max= 2176, per=7.41%, avg=509.33, stdev=821.98, samples=6 00:34:00.997 iops : min= 24, max= 544, avg=127.33, stdev=205.50, samples=6 00:34:00.997 lat (usec) : 250=51.98%, 500=29.21% 00:34:00.997 lat (msec) : 2=0.50%, 10=0.25%, 50=17.82% 00:34:00.997 cpu : usr=0.10%, sys=0.10%, ctx=407, majf=0, minf=1 00:34:00.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:00.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.997 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.997 issued rwts: total=404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.997 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:00.997 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1891225: Tue Dec 10 14:36:01 2024 00:34:00.997 read: IOPS=1413, BW=5652KiB/s (5787kB/s)(18.3MiB/3313msec) 00:34:00.997 slat (usec): min=4, max=22486, avg=28.58, stdev=596.62 00:34:00.997 clat (usec): min=184, max=41695, avg=673.08, stdev=4108.89 00:34:00.997 lat (usec): min=191, max=41716, avg=701.67, stdev=4151.47 00:34:00.997 clat percentiles (usec): 00:34:00.997 | 1.00th=[ 192], 5.00th=[ 206], 10.00th=[ 237], 20.00th=[ 245], 00:34:00.997 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 249], 60.00th=[ 251], 00:34:00.997 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 258], 95.00th=[ 265], 00:34:00.997 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:00.997 | 99.99th=[41681] 00:34:00.997 bw ( KiB/s): min= 96, max=15520, per=74.97%, avg=5153.50, stdev=6239.65, samples=6 00:34:00.997 iops : min= 24, max= 3880, avg=1288.33, stdev=1559.90, samples=6 00:34:00.997 lat (usec) : 250=55.60%, 500=43.08%, 750=0.06%, 1000=0.02% 00:34:00.997 lat (msec) : 2=0.13%, 10=0.04%, 20=0.02%, 50=1.03% 00:34:00.997 cpu : usr=0.27%, sys=1.45%, ctx=4690, majf=0, minf=2 00:34:00.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:00.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.997 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.997 issued rwts: total=4682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.997 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:00.997 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1891226: Tue Dec 10 14:36:01 2024 00:34:00.997 read: IOPS=190, BW=759KiB/s (777kB/s)(2168KiB/2856msec) 00:34:00.997 slat (usec): min=6, max=15675, avg=64.42, stdev=904.98 00:34:00.997 clat (usec): min=192, max=45717, avg=5163.93, stdev=13293.57 00:34:00.997 lat (usec): min=200, max=45739, avg=5228.45, stdev=13309.04 00:34:00.997 clat percentiles (usec): 00:34:00.997 | 1.00th=[ 200], 5.00th=[ 231], 10.00th=[ 241], 20.00th=[ 245], 00:34:00.997 | 
30.00th=[ 249], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:34:00.997 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[41157], 95.00th=[41157], 00:34:00.997 | 99.00th=[41157], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876], 00:34:00.997 | 99.99th=[45876] 00:34:00.997 bw ( KiB/s): min= 96, max= 248, per=1.91%, avg=131.20, stdev=66.12, samples=5 00:34:00.997 iops : min= 24, max= 62, avg=32.80, stdev=16.53, samples=5 00:34:00.997 lat (usec) : 250=44.01%, 500=43.09%, 750=0.37%, 1000=0.18% 00:34:00.997 lat (msec) : 2=0.18%, 50=11.97% 00:34:00.997 cpu : usr=0.04%, sys=0.28%, ctx=545, majf=0, minf=2 00:34:00.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:00.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.997 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.997 issued rwts: total=543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.997 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:00.997 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1891227: Tue Dec 10 14:36:01 2024 00:34:00.997 read: IOPS=25, BW=101KiB/s (103kB/s)(268KiB/2655msec) 00:34:00.997 slat (nsec): min=7855, max=52813, avg=20359.94, stdev=6673.31 00:34:00.997 clat (usec): min=271, max=41986, avg=39204.19, stdev=8467.78 00:34:00.997 lat (usec): min=281, max=42009, avg=39224.64, stdev=8467.00 00:34:00.997 clat percentiles (usec): 00:34:00.997 | 1.00th=[ 273], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:00.997 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:00.997 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:00.997 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:00.997 | 99.99th=[42206] 00:34:00.997 bw ( KiB/s): min= 96, max= 112, per=1.45%, avg=100.80, stdev= 7.16, samples=5 00:34:00.997 iops : min= 24, max= 28, avg=25.20, stdev= 1.79, samples=5 00:34:00.997 lat (usec) : 500=2.94%, 750=1.47% 00:34:00.997 lat (msec) : 50=94.12% 00:34:00.997 cpu : usr=0.11%, sys=0.00%, ctx=68, majf=0, minf=1 00:34:00.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:00.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.997 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.997 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.997 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:00.997 00:34:00.997 Run status group 0 (all jobs): 00:34:00.997 READ: bw=6874KiB/s (7038kB/s), 101KiB/s-5652KiB/s (103kB/s-5787kB/s), io=22.2MiB (23.3MB), run=2655-3313msec 00:34:00.997 00:34:00.997 Disk stats (read/write): 00:34:00.997 nvme0n1: ios=400/0, merge=0/0, ticks=2968/0, in_queue=2968, util=92.60% 00:34:00.997 nvme0n2: ios=4664/0, merge=0/0, ticks=3137/0, in_queue=3137, util=92.21% 00:34:00.997 nvme0n3: ios=361/0, merge=0/0, ticks=2750/0, in_queue=2750, util=95.01% 00:34:00.997 nvme0n4: ios=64/0, merge=0/0, ticks=2506/0, in_queue=2506, util=96.39% 00:34:01.256 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:01.256 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:01.514 14:36:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:01.514 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:01.772 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:01.772 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:01.772 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:01.772 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:02.030 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:02.030 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1891084 00:34:02.030 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:02.030 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:02.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:02.288 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:02.288 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:34:02.288 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:02.288 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:02.288 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:02.288 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:02.288 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:34:02.288 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:02.288 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:02.288 nvmf hotplug test: fio failed as expected 00:34:02.288 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:02.547 14:36:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:02.547 rmmod nvme_tcp 00:34:02.547 rmmod nvme_fabrics 00:34:02.547 rmmod nvme_keyring 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1888410 ']' 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1888410 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1888410 ']' 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1888410 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1888410 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1888410' 00:34:02.547 killing process with pid 1888410 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1888410 00:34:02.547 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1888410 00:34:02.807 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:02.807 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:02.807 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:02.807 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:02.807 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:02.807 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:02.807 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:02.807 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:02.807 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:02.807 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:02.807 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:02.807 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:04.711 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:04.969 00:34:04.969 real 0m27.465s 00:34:04.969 user 1m33.279s 00:34:04.969 sys 0m11.505s 00:34:04.969 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:04.969 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:04.969 ************************************ 00:34:04.969 END TEST nvmf_fio_target 00:34:04.969 ************************************ 00:34:04.969 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:04.969 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:04.969 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:04.969 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:04.969 ************************************ 00:34:04.969 START TEST nvmf_bdevio 00:34:04.969 ************************************ 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:04.970 * Looking for test storage... 
00:34:04.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:04.970 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:05.229 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:05.229 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:05.229 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:05.229 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:05.229 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:05.229 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:05.229 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:05.229 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:05.229 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:05.229 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:05.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.229 --rc genhtml_branch_coverage=1 00:34:05.229 --rc genhtml_function_coverage=1 00:34:05.229 --rc genhtml_legend=1 00:34:05.229 --rc geninfo_all_blocks=1 00:34:05.229 --rc geninfo_unexecuted_blocks=1 00:34:05.229 00:34:05.229 ' 00:34:05.229 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:05.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.229 --rc genhtml_branch_coverage=1 00:34:05.230 --rc genhtml_function_coverage=1 00:34:05.230 --rc genhtml_legend=1 00:34:05.230 --rc geninfo_all_blocks=1 00:34:05.230 --rc geninfo_unexecuted_blocks=1 00:34:05.230 00:34:05.230 ' 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:05.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.230 --rc genhtml_branch_coverage=1 00:34:05.230 --rc genhtml_function_coverage=1 00:34:05.230 --rc genhtml_legend=1 00:34:05.230 --rc geninfo_all_blocks=1 00:34:05.230 --rc geninfo_unexecuted_blocks=1 00:34:05.230 00:34:05.230 ' 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:05.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.230 --rc genhtml_branch_coverage=1 00:34:05.230 --rc genhtml_function_coverage=1 00:34:05.230 --rc genhtml_legend=1 00:34:05.230 --rc geninfo_all_blocks=1 00:34:05.230 --rc geninfo_unexecuted_blocks=1 00:34:05.230 00:34:05.230 ' 00:34:05.230 14:36:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:05.230 14:36:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:05.230 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:11.798 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:11.798 14:36:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:11.798 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:11.798 Found net devices under 0000:af:00.0: cvl_0_0 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:11.798 Found net devices under 0000:af:00.1: cvl_0_1 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:11.798 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:11.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:11.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:34:11.798 00:34:11.799 --- 10.0.0.2 ping statistics --- 00:34:11.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.799 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:34:11.799 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:11.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:11.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:34:11.799 00:34:11.799 --- 10.0.0.1 ping statistics --- 00:34:11.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.799 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:34:11.799 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:11.799 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:34:11.799 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:11.799 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:11.799 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:11.799 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:11.799 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:11.799 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:11.799 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:11.799 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:11.799 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:11.799 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:11.799 14:36:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:11.799 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1896449 00:34:11.799 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:11.799 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1896449 00:34:11.799 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1896449 ']' 00:34:11.799 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:11.799 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:11.799 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:11.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:11.799 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:11.799 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:12.058 [2024-12-10 14:36:12.549044] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:12.058 [2024-12-10 14:36:12.549987] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:34:12.058 [2024-12-10 14:36:12.550029] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:12.058 [2024-12-10 14:36:12.637324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:12.058 [2024-12-10 14:36:12.676603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:12.058 [2024-12-10 14:36:12.676642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:12.058 [2024-12-10 14:36:12.676648] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:12.058 [2024-12-10 14:36:12.676654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:12.058 [2024-12-10 14:36:12.676659] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:12.058 [2024-12-10 14:36:12.678278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:12.058 [2024-12-10 14:36:12.678385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:12.058 [2024-12-10 14:36:12.678473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:12.058 [2024-12-10 14:36:12.678473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:12.058 [2024-12-10 14:36:12.746476] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:34:12.058 [2024-12-10 14:36:12.746890] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:12.058 [2024-12-10 14:36:12.747311] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:12.058 [2024-12-10 14:36:12.747513] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:12.058 [2024-12-10 14:36:12.747566] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:12.996 [2024-12-10 14:36:13.427269] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:12.996 Malloc0 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.996 14:36:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:12.996 [2024-12-10 14:36:13.507504] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:12.996 { 00:34:12.996 "params": { 00:34:12.996 "name": "Nvme$subsystem", 00:34:12.996 "trtype": "$TEST_TRANSPORT", 00:34:12.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:12.996 "adrfam": "ipv4", 00:34:12.996 "trsvcid": "$NVMF_PORT", 00:34:12.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:12.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:12.996 "hdgst": ${hdgst:-false}, 00:34:12.996 "ddgst": ${ddgst:-false} 00:34:12.996 }, 00:34:12.996 "method": "bdev_nvme_attach_controller" 00:34:12.996 } 00:34:12.996 EOF 00:34:12.996 )") 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:34:12.996 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:12.996 "params": { 00:34:12.996 "name": "Nvme1", 00:34:12.996 "trtype": "tcp", 00:34:12.996 "traddr": "10.0.0.2", 00:34:12.996 "adrfam": "ipv4", 00:34:12.996 "trsvcid": "4420", 00:34:12.996 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:12.996 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:12.996 "hdgst": false, 00:34:12.996 "ddgst": false 00:34:12.996 }, 00:34:12.996 "method": "bdev_nvme_attach_controller" 00:34:12.996 }' 00:34:12.996 [2024-12-10 14:36:13.556925] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
00:34:12.996 [2024-12-10 14:36:13.556971] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896695 ] 00:34:12.996 [2024-12-10 14:36:13.637740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:12.996 [2024-12-10 14:36:13.679784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:12.996 [2024-12-10 14:36:13.679891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:12.996 [2024-12-10 14:36:13.679891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:13.255 I/O targets: 00:34:13.255 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:13.255 00:34:13.255 00:34:13.255 CUnit - A unit testing framework for C - Version 2.1-3 00:34:13.255 http://cunit.sourceforge.net/ 00:34:13.255 00:34:13.255 00:34:13.255 Suite: bdevio tests on: Nvme1n1 00:34:13.255 Test: blockdev write read block ...passed 00:34:13.514 Test: blockdev write zeroes read block ...passed 00:34:13.514 Test: blockdev write zeroes read no split ...passed 00:34:13.514 Test: blockdev write zeroes read split ...passed 00:34:13.514 Test: blockdev write zeroes read split partial ...passed 00:34:13.514 Test: blockdev reset ...[2024-12-10 14:36:14.100774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:13.514 [2024-12-10 14:36:14.100839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd478b0 (9): Bad file descriptor 00:34:13.514 [2024-12-10 14:36:14.146089] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:34:13.514 passed 00:34:13.514 Test: blockdev write read 8 blocks ...passed 00:34:13.514 Test: blockdev write read size > 128k ...passed 00:34:13.514 Test: blockdev write read invalid size ...passed 00:34:13.514 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:13.514 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:13.514 Test: blockdev write read max offset ...passed 00:34:13.773 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:13.773 Test: blockdev writev readv 8 blocks ...passed 00:34:13.773 Test: blockdev writev readv 30 x 1block ...passed 00:34:13.773 Test: blockdev writev readv block ...passed 00:34:13.773 Test: blockdev writev readv size > 128k ...passed 00:34:13.773 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:13.773 Test: blockdev comparev and writev ...[2024-12-10 14:36:14.359137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:13.773 [2024-12-10 14:36:14.359166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:13.773 [2024-12-10 14:36:14.359180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:13.773 [2024-12-10 14:36:14.359188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:13.773 [2024-12-10 14:36:14.359487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:13.773 [2024-12-10 14:36:14.359499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:13.773 [2024-12-10 14:36:14.359511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:13.773 [2024-12-10 14:36:14.359519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:13.773 [2024-12-10 14:36:14.359798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:13.773 [2024-12-10 14:36:14.359809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:13.773 [2024-12-10 14:36:14.359821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:13.773 [2024-12-10 14:36:14.359828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:13.773 [2024-12-10 14:36:14.360108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:13.773 [2024-12-10 14:36:14.360121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:13.773 [2024-12-10 14:36:14.360132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:13.773 [2024-12-10 14:36:14.360140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:13.773 passed 00:34:13.773 Test: blockdev nvme passthru rw ...passed 00:34:13.773 Test: blockdev nvme passthru vendor specific ...[2024-12-10 14:36:14.442616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:13.773 [2024-12-10 14:36:14.442635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:13.773 [2024-12-10 14:36:14.442747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:13.773 [2024-12-10 14:36:14.442758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:13.773 [2024-12-10 14:36:14.442865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:13.773 [2024-12-10 14:36:14.442874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:13.773 [2024-12-10 14:36:14.442980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:13.773 [2024-12-10 14:36:14.442991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:13.773 passed 00:34:13.773 Test: blockdev nvme admin passthru ...passed 00:34:13.773 Test: blockdev copy ...passed 00:34:13.773 00:34:13.773 Run Summary: Type Total Ran Passed Failed Inactive 00:34:13.773 suites 1 1 n/a 0 0 00:34:13.773 tests 23 23 23 0 0 00:34:13.773 asserts 152 152 152 0 n/a 00:34:13.773 00:34:13.773 Elapsed time = 1.109 seconds 00:34:14.032 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:14.032 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.032 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:14.032 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.032 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:14.032 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:14.032 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:14.032 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:14.032 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:14.032 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:14.032 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:14.032 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:14.032 rmmod nvme_tcp 00:34:14.032 rmmod nvme_fabrics 00:34:14.032 rmmod nvme_keyring 00:34:14.032 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
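The cleanup traced here (and continuing below) follows a fixed shape: flush, unload the kernel NVMe/TCP modules with retries, stop the target process, and drop the SPDK iptables rules. A hedged sketch of that nvmftestfini path, with the retry bound taken from the trace; the sleep between attempts is an assumption:

    sync
    set +e
    for i in {1..20}; do    # module references can linger briefly after disconnect
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1             # assumed back-off between retries
    done
    set -e
    kill "$nvmfpid" && wait "$nvmfpid"                      # pid 1896449 in this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # remove only SPDK's rules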
00:34:14.032 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:14.032 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:14.032 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1896449 ']' 00:34:14.032 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1896449 00:34:14.032 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1896449 ']' 00:34:14.032 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1896449 00:34:14.032 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:34:14.032 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:14.032 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1896449 00:34:14.291 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:34:14.291 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:34:14.291 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1896449' 00:34:14.291 killing process with pid 1896449 00:34:14.291 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1896449 00:34:14.291 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1896449 00:34:14.291 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:14.291 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:14.291 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:14.291 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:34:14.291 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:34:14.291 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:14.292 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:34:14.292 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:14.292 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:14.292 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:14.292 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:14.292 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.825 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:16.825 00:34:16.825 real 0m11.500s 00:34:16.825 user 
0m9.583s 00:34:16.825 sys 0m5.899s 00:34:16.825 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:16.825 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.825 ************************************ 00:34:16.825 END TEST nvmf_bdevio 00:34:16.825 ************************************ 00:34:16.825 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:16.825 00:34:16.825 real 4m46.739s 00:34:16.825 user 9m16.746s 00:34:16.825 sys 2m1.065s 00:34:16.825 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:16.825 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:16.825 ************************************ 00:34:16.825 END TEST nvmf_target_core_interrupt_mode 00:34:16.825 ************************************ 00:34:16.825 14:36:17 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:16.825 14:36:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:16.825 14:36:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:16.825 14:36:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:16.825 ************************************ 00:34:16.825 START TEST nvmf_interrupt 00:34:16.825 ************************************ 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:16.825 * Looking for test storage... 
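The xtrace that follows steps through scripts/common.sh deciding whether the installed lcov (1.15) predates version 2, which selects the legacy --rc coverage flags. A minimal sketch of that lt/cmp_versions helper, assuming the IFS=.-: field splitting visible in the trace:

    # split both versions on '.', '-' or ':' and compare field by field
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local ver1 ver2 v op=$2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]    # versions compare equal
    }
    lt 1.15 2 && echo "lcov < 2: use --rc lcov_branch_coverage=1 ..."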
00:34:16.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:16.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.825 --rc genhtml_branch_coverage=1 00:34:16.825 --rc genhtml_function_coverage=1 00:34:16.825 --rc genhtml_legend=1 00:34:16.825 --rc geninfo_all_blocks=1 00:34:16.825 --rc geninfo_unexecuted_blocks=1 00:34:16.825 00:34:16.825 ' 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:16.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.825 --rc genhtml_branch_coverage=1 00:34:16.825 --rc genhtml_function_coverage=1 00:34:16.825 --rc genhtml_legend=1 00:34:16.825 --rc geninfo_all_blocks=1 00:34:16.825 --rc geninfo_unexecuted_blocks=1 00:34:16.825 00:34:16.825 ' 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:16.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.825 --rc genhtml_branch_coverage=1 00:34:16.825 --rc genhtml_function_coverage=1 00:34:16.825 --rc genhtml_legend=1 00:34:16.825 --rc geninfo_all_blocks=1 00:34:16.825 --rc geninfo_unexecuted_blocks=1 00:34:16.825 00:34:16.825 ' 00:34:16.825 14:36:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:16.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.825 --rc genhtml_branch_coverage=1 00:34:16.826 --rc genhtml_function_coverage=1 00:34:16.826 --rc genhtml_legend=1 00:34:16.826 --rc geninfo_all_blocks=1 00:34:16.826 --rc geninfo_unexecuted_blocks=1 00:34:16.826 00:34:16.826 ' 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:16.826 14:36:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:23.393 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:23.393 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:23.394 14:36:23 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:23.394 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:23.394 Found net devices under 0000:af:00.0: cvl_0_0 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:23.394 Found net devices under 0000:af:00.1: cvl_0_1 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:23.394 14:36:23 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:23.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:23.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:34:23.394 00:34:23.394 --- 10.0.0.2 ping statistics --- 00:34:23.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:23.394 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:23.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:23.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:34:23.394 00:34:23.394 --- 10.0.0.1 ping statistics --- 00:34:23.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:23.394 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:23.394 14:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:23.394 14:36:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:23.394 14:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:23.394 14:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:23.394 14:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:23.394 14:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1900722 00:34:23.394 14:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1900722 00:34:23.394 14:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:23.394 14:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1900722 ']' 00:34:23.394 14:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:23.394 14:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:23.394 14:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:23.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:23.394 14:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:23.394 14:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:23.394 [2024-12-10 14:36:24.071291] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:23.394 [2024-12-10 14:36:24.072212] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:34:23.394 [2024-12-10 14:36:24.072275] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:23.653 [2024-12-10 14:36:24.158506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:23.653 [2024-12-10 14:36:24.197011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:34:23.653 [2024-12-10 14:36:24.197048] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:23.653 [2024-12-10 14:36:24.197056] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:23.653 [2024-12-10 14:36:24.197064] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:23.653 [2024-12-10 14:36:24.197069] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:23.653 [2024-12-10 14:36:24.198164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:23.653 [2024-12-10 14:36:24.198164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:23.653 [2024-12-10 14:36:24.265304] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:23.653 [2024-12-10 14:36:24.265751] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:23.653 [2024-12-10 14:36:24.266007] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:24.220 14:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:24.220 14:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:34:24.220 14:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:24.220 14:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:24.220 14:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:24.220 14:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:24.220 14:36:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:24.220 14:36:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:24.479 14:36:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:24.479 14:36:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:24.479 5000+0 records in 00:34:24.479 5000+0 records out 00:34:24.479 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0183597 s, 558 MB/s 00:34:24.479 14:36:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:24.479 14:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.479 14:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:24.479 AIO0 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:24.479 [2024-12-10 14:36:25.026944] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.479 14:36:25 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:24.479 [2024-12-10 14:36:25.055157] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1900722 0 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1900722 0 idle 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1900722 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1900722 -w 256 00:34:24.479 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1900722 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.26 reactor_0' 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1900722 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.26 reactor_0 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1900722 1 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1900722 1 idle 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1900722 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1900722 -w 256 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1900726 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1900726 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1900988 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
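With perf launched against the target, the loop traced below decides whether reactor 0 has left its interrupt-mode idle state by sampling per-thread CPU usage. A minimal sketch of that reactor_is_busy_or_idle check (the pid, column position, sed cleanup, and 30% BUSY_THRESHOLD all come from the surrounding trace; top's column order can vary by procps version):

    pid=1900722 idx=0 busy_threshold=30
    for (( j = 10; j != 0; j-- )); do
        top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
        cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')  # %CPU field
        cpu_rate=${cpu_rate%.*}    # truncate: 18.8 -> 18, 99.9 -> 99
        (( cpu_rate >= busy_threshold )) && { echo "reactor_${idx} is busy"; break; }
        sleep 1    # perf is still ramping up; sample again
    done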
00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1900722 0 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1900722 0 busy 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1900722 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1900722 -w 256 00:34:24.737 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:24.996 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1900722 root 20 0 128.2g 47616 34560 R 18.8 0.0 0:00.29 reactor_0' 00:34:24.996 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1900722 root 20 0 128.2g 47616 34560 R 18.8 0.0 0:00.29 reactor_0 00:34:24.996 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:24.996 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:24.996 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=18.8 00:34:24.996 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=18 00:34:24.996 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:24.996 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:24.996 14:36:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:34:25.929 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:34:25.929 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:25.929 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1900722 -w 256 00:34:25.929 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1900722 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:02.64 reactor_0' 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1900722 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:02.64 reactor_0 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1900722 1 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1900722 1 busy 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1900722 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1900722 -w 256 00:34:26.187 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:26.445 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1900726 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:01.38 reactor_1' 00:34:26.445 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1900726 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:01.38 reactor_1 00:34:26.445 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:26.445 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:26.445 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:26.445 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:26.445 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:26.445 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:26.445 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:26.445 14:36:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:26.445 14:36:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1900988 00:34:36.411 Initializing NVMe Controllers 00:34:36.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:36.411 Controller IO queue size 256, less than required. 00:34:36.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:36.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:36.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:36.411 Initialization complete. Launching workers. 
00:34:36.411 ========================================================
00:34:36.411 Latency(us)
00:34:36.411 Device Information : IOPS MiB/s Average min max
00:34:36.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16915.00 66.07 15141.15 3565.87 31236.35
00:34:36.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 17094.80 66.78 14979.04 7777.36 28291.27
00:34:36.411 ========================================================
00:34:36.411 Total : 34009.80 132.85 15059.67 3565.87 31236.35
00:34:36.411
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1900722 0
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1900722 0 idle
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1900722
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1900722 -w 256
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1900722 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.24 reactor_0'
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1900722 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.24 reactor_0
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1900722 1
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1900722 1 idle
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1900722
00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt --
interrupt/common.sh@11 -- # local idx=1 00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1900722 -w 256 00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1900726 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1' 00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1900726 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1 00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:36.411 14:36:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:36.411 14:36:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:36.411 14:36:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:36.411 14:36:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:36.411 14:36:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:36.412 14:36:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:36.412 14:36:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:36.412 14:36:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:36.412 14:36:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:34:36.412 14:36:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:34:36.412 14:36:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:36.412 14:36:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:36.412 14:36:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:34:37.789 14:36:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:37.789 14:36:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:37.789 14:36:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:37.789 14:36:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:37.789 14:36:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:37.789 14:36:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:34:37.789 14:36:38 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:34:37.789 14:36:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1900722 0 00:34:37.789 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1900722 0 idle 00:34:37.789 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1900722 00:34:37.789 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:37.789 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:37.789 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:37.789 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:37.789 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:37.789 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:37.789 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:37.789 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:37.789 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:37.789 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1900722 -w 256 00:34:37.789 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1900722 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:20.50 reactor_0' 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1900722 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:20.50 reactor_0 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1900722 1 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1900722 1 idle 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1900722 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
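Every reactor_is_busy_or_idle invocation in this test follows the pattern traced above: take one batch-mode sample from top, pick out the %CPU field for the reactor_<idx> thread, truncate it to an integer, and retry up to ten times, one second apart, until it crosses the threshold. A minimal sketch of that pattern, reconstructed from the xtrace output here rather than from the canonical interrupt/common.sh:

    # sketch reconstructed from the trace; the real helper does more bookkeeping
    reactor_cpu_rate() {
        local pid=$1 idx=$2                        # target pid, reactor index
        # -b batch mode, -H per-thread rows, -n 1 one sample, -w 256 wide output
        top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" |
            sed -e 's/^\s*//g' | awk '{print $9}'  # field 9 is %CPU
    }

    for ((j = 10; j != 0; j--)); do                # up to ten samples
        rate=$(reactor_cpu_rate 1900722 1)
        [[ -n $rate ]] || { sleep 1; continue; }   # thread row not found yet
        rate=${rate%.*}                            # truncate 99.9 -> 99
        (( rate <= 30 )) && break                  # idle_threshold=30 above
        sleep 1
    done

For the busy checks the comparison flips: sampling continues until %CPU rises above BUSY_THRESHOLD=30, which is why the first busy sample of 18.8 earlier in this test triggered a sleep-and-retry before the 99.9 reading passed.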
00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1900722 -w 256 00:34:38.048 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:38.308 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1900726 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.10 reactor_1' 00:34:38.308 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1900726 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.10 reactor_1 00:34:38.308 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:38.308 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:38.308 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:38.308 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:38.308 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:38.308 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:38.308 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:38.308 14:36:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:38.308 14:36:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:38.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:38.308 14:36:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:38.308 14:36:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:34:38.308 14:36:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:38.308 14:36:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:38.308 14:36:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:38.308 14:36:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:38.308 14:36:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:34:38.308 14:36:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:34:38.308 14:36:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:34:38.308 14:36:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:38.308 14:36:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:34:38.308 14:36:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:38.308 14:36:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:34:38.308 14:36:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:38.308 14:36:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:38.567 rmmod nvme_tcp 00:34:38.567 rmmod nvme_fabrics 00:34:38.567 rmmod nvme_keyring 00:34:38.567 14:36:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:38.567 14:36:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:34:38.567 14:36:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:34:38.567 14:36:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
1900722 ']' 00:34:38.567 14:36:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1900722 00:34:38.567 14:36:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1900722 ']' 00:34:38.567 14:36:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1900722 00:34:38.567 14:36:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:34:38.567 14:36:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:38.567 14:36:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1900722 00:34:38.567 14:36:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:38.567 14:36:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:38.567 14:36:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1900722' 00:34:38.567 killing process with pid 1900722 00:34:38.567 14:36:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1900722 00:34:38.567 14:36:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1900722 00:34:38.826 14:36:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:38.826 14:36:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:38.826 14:36:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:38.826 14:36:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:34:38.826 14:36:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:34:38.826 14:36:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:38.826 14:36:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:34:38.826 14:36:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:38.826 14:36:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:38.826 14:36:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:38.826 14:36:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:38.826 14:36:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:40.730 14:36:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:40.730 00:34:40.730 real 0m24.292s 00:34:40.730 user 0m40.062s 00:34:40.730 sys 0m9.089s 00:34:40.730 14:36:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:40.730 14:36:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:40.730 ************************************ 00:34:40.731 END TEST nvmf_interrupt 00:34:40.731 ************************************ 00:34:40.990 00:34:40.990 real 28m36.195s 00:34:40.990 user 57m18.735s 00:34:40.990 sys 10m3.568s 00:34:40.990 14:36:41 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:40.990 14:36:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:40.990 ************************************ 00:34:40.990 END TEST nvmf_tcp 00:34:40.990 ************************************ 00:34:40.990 14:36:41 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:34:40.990 14:36:41 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:40.990 14:36:41 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:40.990 14:36:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:40.990 14:36:41 -- common/autotest_common.sh@10 -- # set +x 00:34:40.990 ************************************ 00:34:40.990 START TEST spdkcli_nvmf_tcp 00:34:40.990 ************************************ 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:40.990 * Looking for test storage... 00:34:40.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:40.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.990 --rc genhtml_branch_coverage=1 00:34:40.990 --rc genhtml_function_coverage=1 00:34:40.990 --rc genhtml_legend=1 00:34:40.990 --rc geninfo_all_blocks=1 00:34:40.990 --rc geninfo_unexecuted_blocks=1 00:34:40.990 00:34:40.990 ' 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:40.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.990 --rc genhtml_branch_coverage=1 00:34:40.990 --rc genhtml_function_coverage=1 00:34:40.990 --rc genhtml_legend=1 00:34:40.990 --rc geninfo_all_blocks=1 00:34:40.990 --rc geninfo_unexecuted_blocks=1 00:34:40.990 00:34:40.990 ' 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:40.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.990 --rc genhtml_branch_coverage=1 00:34:40.990 --rc genhtml_function_coverage=1 00:34:40.990 --rc genhtml_legend=1 00:34:40.990 --rc geninfo_all_blocks=1 00:34:40.990 --rc geninfo_unexecuted_blocks=1 00:34:40.990 00:34:40.990 ' 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:40.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.990 --rc genhtml_branch_coverage=1 00:34:40.990 --rc genhtml_function_coverage=1 00:34:40.990 --rc genhtml_legend=1 00:34:40.990 --rc geninfo_all_blocks=1 00:34:40.990 --rc geninfo_unexecuted_blocks=1 00:34:40.990 00:34:40.990 ' 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:40.990 14:36:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:41.250 
14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:41.250 14:36:41 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:41.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1903648 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1903648 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1903648 ']' 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:41.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:41.250 14:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:41.250 [2024-12-10 14:36:41.813630] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
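The waitforlisten call above blocks until the freshly started nvmf_tgt (pid 1903648) answers RPC requests on /var/tmp/spdk.sock. A minimal sketch of that wait loop, assuming SPDK's scripts/rpc.py client and its spdk_get_version method; the real helper in autotest_common.sh performs extra validation, and only max_retries=100 and the socket path are taken from this trace:

    # hedged sketch of the waitforlisten pattern seen in this trace
    pid=1903648
    rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do                # max_retries=100, as traced
        # bail out if the target died before binding its RPC socket
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        # succeeds only once the app listens on the UNIX-domain socket
        scripts/rpc.py -s "$rpc_addr" spdk_get_version &>/dev/null && break
        sleep 0.5
    done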
00:34:41.250 [2024-12-10 14:36:41.813679] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1903648 ] 00:34:41.250 [2024-12-10 14:36:41.895853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:41.250 [2024-12-10 14:36:41.935686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:41.250 [2024-12-10 14:36:41.935687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:42.017 14:36:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:42.017 14:36:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:34:42.017 14:36:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:42.017 14:36:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:42.017 14:36:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:42.017 14:36:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:42.017 14:36:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:42.017 14:36:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:42.017 14:36:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:42.017 14:36:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:42.017 14:36:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:42.017 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:42.017 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:42.017 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:42.017 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:42.017 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:42.017 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:42.017 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:42.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:42.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:42.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:42.017 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:42.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:42.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:42.017 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:42.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:42.017 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:34:42.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:42.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:42.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:42.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:42.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:42.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:42.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:42.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:42.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:42.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:42.018 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:42.018 ' 00:34:45.302 [2024-12-10 14:36:45.369110] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:46.237 [2024-12-10 14:36:46.705550] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:48.769 [2024-12-10 14:36:49.197193] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:50.671 [2024-12-10 14:36:51.351978] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:52.573 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:52.573 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:52.573 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:52.573 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:52.573 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:52.573 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:52.573 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:52.573 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:52.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:52.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:52.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:52.573 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:52.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:52.573 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:52.573 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:52.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:52.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:52.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:52.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:52.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:52.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:52.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:52.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:52.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:52.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:52.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:52.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:52.573 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:52.573 14:36:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:52.573 14:36:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:52.573 14:36:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:52.573 14:36:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:52.573 14:36:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:52.573 14:36:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:52.573 14:36:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:52.573 14:36:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:52.832 14:36:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:53.090 14:36:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:53.090 14:36:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:53.090 14:36:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:53.090 14:36:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:53.090 
14:36:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:53.090 14:36:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:53.090 14:36:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:53.090 14:36:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:53.090 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:53.090 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:53.090 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:53.090 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:53.090 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:53.090 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:53.090 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:53.090 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:53.090 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:53.090 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:53.090 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:53.090 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:53.090 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:53.090 ' 00:34:59.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:59.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:59.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:59.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:59.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:59.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:59.668 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:59.668 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:59.668 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:59.668 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:59.668 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:59.668 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:59.668 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:59.668 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:59.668 
14:36:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1903648 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1903648 ']' 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1903648 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1903648 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1903648' 00:34:59.668 killing process with pid 1903648 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1903648 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1903648 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1903648 ']' 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1903648 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1903648 ']' 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1903648 00:34:59.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1903648) - No such process 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1903648 is not found' 00:34:59.668 Process with pid 1903648 is not found 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:59.668 00:34:59.668 real 0m17.952s 00:34:59.668 user 0m39.565s 00:34:59.668 sys 0m0.829s 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:59.668 14:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:59.668 ************************************ 00:34:59.668 END TEST spdkcli_nvmf_tcp 00:34:59.668 ************************************ 00:34:59.668 14:36:59 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:59.668 14:36:59 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:59.668 14:36:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:59.668 14:36:59 -- common/autotest_common.sh@10 -- # set +x 00:34:59.668 ************************************ 00:34:59.668 START TEST nvmf_identify_passthru 00:34:59.668 ************************************ 00:34:59.668 14:36:59 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:59.668 * Looking for test 
storage... 00:34:59.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:59.668 14:36:59 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:59.668 14:36:59 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:34:59.668 14:36:59 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:59.668 14:36:59 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:59.668 14:36:59 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:34:59.668 14:36:59 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:59.668 14:36:59 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:59.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:59.668 --rc genhtml_branch_coverage=1 00:34:59.668 --rc genhtml_function_coverage=1 00:34:59.668 --rc genhtml_legend=1 00:34:59.668 --rc geninfo_all_blocks=1 00:34:59.668 --rc geninfo_unexecuted_blocks=1 00:34:59.668 00:34:59.668 ' 00:34:59.668 14:36:59 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:59.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:59.668 --rc genhtml_branch_coverage=1 00:34:59.668 --rc genhtml_function_coverage=1 00:34:59.668 --rc genhtml_legend=1 00:34:59.668 --rc geninfo_all_blocks=1 00:34:59.668 --rc geninfo_unexecuted_blocks=1 00:34:59.668 00:34:59.668 ' 00:34:59.668 14:36:59 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:59.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:59.668 --rc genhtml_branch_coverage=1 00:34:59.668 --rc genhtml_function_coverage=1 00:34:59.668 --rc genhtml_legend=1 00:34:59.668 --rc geninfo_all_blocks=1 00:34:59.668 --rc geninfo_unexecuted_blocks=1 00:34:59.668 00:34:59.668 ' 00:34:59.668 14:36:59 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:59.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:59.668 --rc genhtml_branch_coverage=1 00:34:59.668 --rc genhtml_function_coverage=1 00:34:59.668 --rc genhtml_legend=1 00:34:59.668 --rc geninfo_all_blocks=1 00:34:59.668 --rc geninfo_unexecuted_blocks=1 00:34:59.668 00:34:59.668 ' 00:34:59.668 14:36:59 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:59.668 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:59.668 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:59.668 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:59.668 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:59.668 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:59.668 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:59.668 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:59.668 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:59.668 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:59.668 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:59.668 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:59.668 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:59.669 14:36:59 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:59.669 14:36:59 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:59.669 14:36:59 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:59.669 14:36:59 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:59.669 14:36:59 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.669 14:36:59 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.669 14:36:59 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.669 14:36:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:59.669 14:36:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:59.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:59.669 14:36:59 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:59.669 14:36:59 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:59.669 14:36:59 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:59.669 14:36:59 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:59.669 14:36:59 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:59.669 14:36:59 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.669 14:36:59 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.669 14:36:59 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.669 14:36:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:59.669 14:36:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.669 14:36:59 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:59.669 14:36:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:59.669 14:36:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:59.669 14:36:59 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:34:59.669 14:36:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:06.236 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:06.236 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:06.236 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:06.236 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:06.236 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:06.236 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:06.236 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:06.236 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:06.236 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:06.236 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:06.236 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:06.236 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:06.236 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:06.236 14:37:06 
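The trace below is gather_supported_nvmf_pci_devs walking the PCI bus for NICs the test can drive. As a rough sketch (not the SPDK helper itself, and with the ID table cut down to the E810 entries that actually match in this log), the scan amounts to:

for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")     # 0x8086 = Intel
    device=$(cat "$pci/device")     # 0x159b = E810 port, per the matches below
    if [[ $vendor == 0x8086 && ( $device == 0x1592 || $device == 0x159b ) ]]; then
        echo "Found ${pci##*/} ($vendor - $device)"
        ls "$pci/net" 2>/dev/null   # kernel netdevs bound to the port, e.g. cvl_0_0
    fi
done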
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:06.236 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:06.236 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:06.236 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:06.236 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:06.236 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:06.236 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:06.236 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:06.237 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:06.237 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:06.237 Found net devices under 0000:af:00.0: cvl_0_0 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:06.237 Found net devices under 0000:af:00.1: cvl_0_1 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:06.237 14:37:06 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:06.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:06.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:35:06.237 00:35:06.237 --- 10.0.0.2 ping statistics --- 00:35:06.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.237 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:06.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:06.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:35:06.237 00:35:06.237 --- 10.0.0.1 ping statistics --- 00:35:06.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.237 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:06.237 14:37:06 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:06.237 14:37:06 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:06.237 14:37:06 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:06.237 14:37:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:06.237 14:37:06 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:06.237 14:37:06 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:35:06.237 14:37:06 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:35:06.237 14:37:06 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:35:06.237 14:37:06 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:35:06.237 14:37:06 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:35:06.237 14:37:06 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:35:06.237 14:37:06 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:06.237 14:37:06 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:06.237 14:37:06 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:35:06.237 14:37:06 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:35:06.237 14:37:06 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:35:06.237 14:37:06 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:35:06.237 14:37:06 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:35:06.237 14:37:06 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:35:06.237 14:37:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:06.237 14:37:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:35:06.237 14:37:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:10.425 14:37:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ807001JM1P0FGN 00:35:10.425 14:37:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:35:10.425 14:37:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:10.425 14:37:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:14.613 14:37:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:35:14.613 14:37:14 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:14.613 14:37:14 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:14.613 14:37:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.613 14:37:15 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:14.613 14:37:15 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:14.613 14:37:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.613 14:37:15 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:14.613 14:37:15 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1911345 00:35:14.613 14:37:15 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:14.613 14:37:15 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1911345 00:35:14.613 14:37:15 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1911345 ']' 00:35:14.613 14:37:15 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:14.614 14:37:15 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:14.614 14:37:15 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:14.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:14.614 14:37:15 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:14.614 14:37:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.614 [2024-12-10 14:37:15.052070] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:35:14.614 [2024-12-10 14:37:15.052116] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:14.614 [2024-12-10 14:37:15.133487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:14.614 [2024-12-10 14:37:15.175516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:14.614 [2024-12-10 14:37:15.175554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
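The serial number (BTLJ807001JM1P0FGN) and model (INTEL) captured above become the baseline that the passthru subsystem must reproduce over the fabric (the '!=' comparisons further down in this test). Condensed from the commands traced here, using this workspace's paths and the build/bin copy of spdk_nvme_identify:

bdf=0000:5e:00.0
identify=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
nvme_serial_number=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
nvme_model_number=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')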
00:35:14.614 [2024-12-10 14:37:15.175561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:14.614 [2024-12-10 14:37:15.175567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:14.614 [2024-12-10 14:37:15.175572] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:14.614 [2024-12-10 14:37:15.177021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:14.614 [2024-12-10 14:37:15.177051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:14.614 [2024-12-10 14:37:15.177161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:14.614 [2024-12-10 14:37:15.177162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:14.614 14:37:15 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:14.614 14:37:15 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:35:14.614 14:37:15 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:14.614 14:37:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.614 14:37:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.614 INFO: Log level set to 20 00:35:14.614 INFO: Requests: 00:35:14.614 { 00:35:14.614 "jsonrpc": "2.0", 00:35:14.614 "method": "nvmf_set_config", 00:35:14.614 "id": 1, 00:35:14.614 "params": { 00:35:14.614 "admin_cmd_passthru": { 00:35:14.614 "identify_ctrlr": true 00:35:14.614 } 00:35:14.614 } 00:35:14.614 } 00:35:14.614 00:35:14.614 INFO: response: 00:35:14.614 { 00:35:14.614 "jsonrpc": "2.0", 00:35:14.614 "id": 1, 00:35:14.614 "result": true 00:35:14.614 } 00:35:14.614 00:35:14.614 14:37:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.614 14:37:15 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:14.614 14:37:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.614 14:37:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.614 INFO: Setting log level to 20 00:35:14.614 INFO: Setting log level to 20 00:35:14.614 INFO: Log level set to 20 00:35:14.614 INFO: Log level set to 20 00:35:14.614 INFO: Requests: 00:35:14.614 { 00:35:14.614 "jsonrpc": "2.0", 00:35:14.614 "method": "framework_start_init", 00:35:14.614 "id": 1 00:35:14.614 } 00:35:14.614 00:35:14.614 INFO: Requests: 00:35:14.614 { 00:35:14.614 "jsonrpc": "2.0", 00:35:14.614 "method": "framework_start_init", 00:35:14.614 "id": 1 00:35:14.614 } 00:35:14.614 00:35:14.614 [2024-12-10 14:37:15.293882] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:14.614 INFO: response: 00:35:14.614 { 00:35:14.614 "jsonrpc": "2.0", 00:35:14.614 "id": 1, 00:35:14.614 "result": true 00:35:14.614 } 00:35:14.614 00:35:14.614 INFO: response: 00:35:14.614 { 00:35:14.614 "jsonrpc": "2.0", 00:35:14.614 "id": 1, 00:35:14.614 "result": true 00:35:14.614 } 00:35:14.614 00:35:14.614 14:37:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.614 14:37:15 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:14.614 14:37:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.614 14:37:15 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:14.614 INFO: Setting log level to 40 00:35:14.614 INFO: Setting log level to 40 00:35:14.614 INFO: Setting log level to 40 00:35:14.614 [2024-12-10 14:37:15.307142] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:14.614 14:37:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.614 14:37:15 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:14.614 14:37:15 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:14.614 14:37:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.614 14:37:15 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:35:14.614 14:37:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.614 14:37:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.898 Nvme0n1 00:35:17.898 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.898 14:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:17.899 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.899 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.899 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.899 14:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:17.899 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.899 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.899 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.899 14:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:17.899 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.899 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.899 [2024-12-10 14:37:18.222365] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:17.899 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.899 14:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:17.899 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.899 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.899 [ 00:35:17.899 { 00:35:17.899 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:17.899 "subtype": "Discovery", 00:35:17.899 "listen_addresses": [], 00:35:17.899 "allow_any_host": true, 00:35:17.899 "hosts": [] 00:35:17.899 }, 00:35:17.899 { 00:35:17.899 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:17.899 "subtype": "NVMe", 00:35:17.899 "listen_addresses": [ 00:35:17.899 { 00:35:17.899 "trtype": "TCP", 00:35:17.899 "adrfam": "IPv4", 00:35:17.899 "traddr": "10.0.0.2", 00:35:17.899 "trsvcid": "4420" 00:35:17.899 } 00:35:17.899 ], 00:35:17.899 "allow_any_host": true, 00:35:17.899 "hosts": [], 00:35:17.899 "serial_number": 
"SPDK00000000000001", 00:35:17.899 "model_number": "SPDK bdev Controller", 00:35:17.899 "max_namespaces": 1, 00:35:17.899 "min_cntlid": 1, 00:35:17.899 "max_cntlid": 65519, 00:35:17.899 "namespaces": [ 00:35:17.899 { 00:35:17.899 "nsid": 1, 00:35:17.899 "bdev_name": "Nvme0n1", 00:35:17.899 "name": "Nvme0n1", 00:35:17.899 "nguid": "6846679E14C148D3A142C7C052AD058C", 00:35:17.899 "uuid": "6846679e-14c1-48d3-a142-c7c052ad058c" 00:35:17.899 } 00:35:17.899 ] 00:35:17.899 } 00:35:17.899 ] 00:35:17.899 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.899 14:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:17.899 14:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:17.899 14:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:17.899 14:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ807001JM1P0FGN 00:35:17.899 14:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:17.899 14:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:17.899 14:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:17.899 14:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:17.899 14:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ807001JM1P0FGN '!=' BTLJ807001JM1P0FGN ']' 00:35:17.899 14:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:17.899 14:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:17.899 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.899 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.899 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.899 14:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:17.899 14:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:17.899 14:37:18 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:17.899 14:37:18 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:17.899 14:37:18 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:17.899 14:37:18 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:17.899 14:37:18 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:17.899 14:37:18 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:17.899 rmmod nvme_tcp 00:35:18.157 rmmod nvme_fabrics 00:35:18.157 rmmod nvme_keyring 00:35:18.157 14:37:18 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:18.157 14:37:18 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:18.157 14:37:18 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:18.157 14:37:18 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 1911345 ']' 00:35:18.157 14:37:18 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1911345 00:35:18.157 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1911345 ']' 00:35:18.157 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1911345 00:35:18.157 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:35:18.157 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:18.157 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1911345 00:35:18.157 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:18.157 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:18.157 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1911345' 00:35:18.158 killing process with pid 1911345 00:35:18.158 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1911345 00:35:18.158 14:37:18 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1911345 00:35:19.532 14:37:20 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:19.532 14:37:20 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:19.532 14:37:20 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:19.532 14:37:20 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:19.532 14:37:20 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:35:19.532 14:37:20 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:19.532 14:37:20 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:35:19.532 14:37:20 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:19.532 14:37:20 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:19.532 14:37:20 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.532 14:37:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:19.532 14:37:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:22.065 14:37:22 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:22.065 00:35:22.065 real 0m22.675s 00:35:22.065 user 0m26.974s 00:35:22.065 sys 0m6.761s 00:35:22.065 14:37:22 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:22.065 14:37:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:22.065 ************************************ 00:35:22.065 END TEST nvmf_identify_passthru 00:35:22.065 ************************************ 00:35:22.065 14:37:22 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:22.065 14:37:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:22.065 14:37:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:22.065 14:37:22 -- common/autotest_common.sh@10 -- # set +x 00:35:22.065 ************************************ 00:35:22.065 START TEST nvmf_dif 00:35:22.065 ************************************ 00:35:22.065 14:37:22 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:22.065 * Looking for test 
storage... 00:35:22.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:22.065 14:37:22 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:22.065 14:37:22 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:35:22.065 14:37:22 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:22.065 14:37:22 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:22.065 14:37:22 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:22.065 14:37:22 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:22.065 14:37:22 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:22.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:22.065 --rc genhtml_branch_coverage=1 00:35:22.065 --rc genhtml_function_coverage=1 00:35:22.065 --rc genhtml_legend=1 00:35:22.065 --rc geninfo_all_blocks=1 00:35:22.065 --rc geninfo_unexecuted_blocks=1 00:35:22.065 00:35:22.065 ' 00:35:22.065 14:37:22 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:22.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:22.065 --rc genhtml_branch_coverage=1 00:35:22.065 --rc genhtml_function_coverage=1 00:35:22.065 --rc genhtml_legend=1 00:35:22.065 --rc geninfo_all_blocks=1 00:35:22.065 --rc geninfo_unexecuted_blocks=1 00:35:22.065 00:35:22.065 ' 00:35:22.065 14:37:22 nvmf_dif -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:22.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:22.065 --rc genhtml_branch_coverage=1 00:35:22.065 --rc genhtml_function_coverage=1 00:35:22.065 --rc genhtml_legend=1 00:35:22.065 --rc geninfo_all_blocks=1 00:35:22.065 --rc geninfo_unexecuted_blocks=1 00:35:22.065 00:35:22.065 ' 00:35:22.065 14:37:22 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:22.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:22.065 --rc genhtml_branch_coverage=1 00:35:22.065 --rc genhtml_function_coverage=1 00:35:22.065 --rc genhtml_legend=1 00:35:22.065 --rc geninfo_all_blocks=1 00:35:22.065 --rc geninfo_unexecuted_blocks=1 00:35:22.065 00:35:22.065 ' 00:35:22.065 14:37:22 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:22.065 14:37:22 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:22.065 14:37:22 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:22.065 14:37:22 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:22.065 14:37:22 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:22.066 14:37:22 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:22.066 14:37:22 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:22.066 14:37:22 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:22.066 14:37:22 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:22.066 14:37:22 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.066 14:37:22 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.066 14:37:22 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.066 14:37:22 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:22.066 14:37:22 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:22.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:22.066 14:37:22 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:22.066 14:37:22 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:22.066 14:37:22 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:22.066 14:37:22 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:22.066 14:37:22 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:22.066 14:37:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:22.066 14:37:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:22.066 14:37:22 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:35:22.066 14:37:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:28.634 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:28.634 
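As in the identify_passthru run earlier in this log, the scan that resumes below ends in nvmf_tcp_init, which splits the two E810 ports into a target network namespace and an initiator left in the root namespace. Condensed from the commands traced in this log (interface names are this host's cvl_* ports; the log's iptables rule also carries an SPDK_NVMF comment tag):

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns reaches the target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back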
14:37:28 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:28.634 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:28.634 Found net devices under 0000:af:00.0: cvl_0_0 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:28.634 Found net devices under 0000:af:00.1: cvl_0_1 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:28.634 14:37:28 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:28.634 14:37:29 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:28.634 14:37:29 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:28.634 14:37:29 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:28.634 14:37:29 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:28.634 14:37:29 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:28.634 14:37:29 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:28.634 14:37:29 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:28.634 14:37:29 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:28.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:28.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:35:28.634 00:35:28.634 --- 10.0.0.2 ping statistics --- 00:35:28.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:28.634 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:35:28.634 14:37:29 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:28.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:28.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:35:28.634 00:35:28.634 --- 10.0.0.1 ping statistics --- 00:35:28.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:28.634 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:35:28.634 14:37:29 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:28.634 14:37:29 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:35:28.634 14:37:29 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:28.634 14:37:29 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:31.923 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:35:31.923 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:35:31.923 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:31.923 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:35:31.923 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:35:31.923 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:35:31.923 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:35:31.923 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:35:31.923 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:35:31.923 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:35:31.923 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:35:31.923 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:35:31.923 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:35:31.923 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:35:31.923 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:35:31.923 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:35:31.923 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:35:31.923 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:35:31.923 14:37:32 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:31.923 14:37:32 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:31.923 14:37:32 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:31.923 14:37:32 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:31.923 14:37:32 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:31.923 14:37:32 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:31.923 14:37:32 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:31.923 14:37:32 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:31.923 14:37:32 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:31.923 14:37:32 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:31.923 14:37:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:31.923 14:37:32 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1917393 00:35:31.923 14:37:32 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1917393 00:35:31.923 14:37:32 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:31.923 14:37:32 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1917393 ']' 00:35:31.923 14:37:32 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:31.923 14:37:32 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:31.923 14:37:32 nvmf_dif -- common/autotest_common.sh@842 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:31.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:31.923 14:37:32 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:31.923 14:37:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:31.923 [2024-12-10 14:37:32.656829] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:35:31.923 [2024-12-10 14:37:32.656876] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:32.182 [2024-12-10 14:37:32.742630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.182 [2024-12-10 14:37:32.782517] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:32.182 [2024-12-10 14:37:32.782551] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:32.182 [2024-12-10 14:37:32.782558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:32.182 [2024-12-10 14:37:32.782564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:32.182 [2024-12-10 14:37:32.782569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:32.182 [2024-12-10 14:37:32.783088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:32.182 14:37:32 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:32.182 14:37:32 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:35:32.182 14:37:32 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:32.182 14:37:32 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:32.182 14:37:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:32.182 14:37:32 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:32.182 14:37:32 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:32.441 14:37:32 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:32.441 14:37:32 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.441 14:37:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:32.441 [2024-12-10 14:37:32.925910] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:32.441 14:37:32 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.441 14:37:32 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:32.441 14:37:32 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:32.441 14:37:32 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:32.441 14:37:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:32.441 ************************************ 00:35:32.441 START TEST fio_dif_1_default 00:35:32.441 ************************************ 00:35:32.441 14:37:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:35:32.441 14:37:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:32.441 14:37:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:32.441 14:37:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub 
in "$@" 00:35:32.441 14:37:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:32.441 14:37:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:32.441 14:37:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:32.441 14:37:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.441 14:37:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:32.441 bdev_null0 00:35:32.441 14:37:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.441 14:37:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:32.441 14:37:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.441 14:37:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:32.441 14:37:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.441 14:37:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:32.441 14:37:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.441 14:37:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:32.441 14:37:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.441 14:37:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:32.442 14:37:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.442 14:37:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:32.442 [2024-12-10 14:37:33.002238] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:32.442 { 00:35:32.442 "params": { 00:35:32.442 "name": "Nvme$subsystem", 00:35:32.442 "trtype": "$TEST_TRANSPORT", 00:35:32.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:32.442 "adrfam": "ipv4", 00:35:32.442 "trsvcid": "$NVMF_PORT", 00:35:32.442 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:35:32.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:32.442 "hdgst": ${hdgst:-false}, 00:35:32.442 "ddgst": ${ddgst:-false} 00:35:32.442 }, 00:35:32.442 "method": "bdev_nvme_attach_controller" 00:35:32.442 } 00:35:32.442 EOF 00:35:32.442 )") 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:32.442 "params": { 00:35:32.442 "name": "Nvme0", 00:35:32.442 "trtype": "tcp", 00:35:32.442 "traddr": "10.0.0.2", 00:35:32.442 "adrfam": "ipv4", 00:35:32.442 "trsvcid": "4420", 00:35:32.442 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:32.442 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:32.442 "hdgst": false, 00:35:32.442 "ddgst": false 00:35:32.442 }, 00:35:32.442 "method": "bdev_nvme_attach_controller" 00:35:32.442 }' 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:32.442 14:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:32.701 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:32.701 fio-3.35 00:35:32.701 Starting 1 thread 00:35:44.905 00:35:44.905 filename0: (groupid=0, jobs=1): err= 0: pid=1917758: Tue Dec 10 14:37:43 2024 00:35:44.905 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10013msec) 00:35:44.905 slat (nsec): min=5532, max=26554, avg=6223.36, stdev=1255.27 00:35:44.905 clat (usec): min=397, max=45692, avg=41018.23, stdev=2650.99 00:35:44.905 lat (usec): min=403, max=45719, avg=41024.46, stdev=2651.03 00:35:44.905 clat percentiles (usec): 00:35:44.905 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:35:44.905 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:44.905 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:35:44.905 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:35:44.905 | 99.99th=[45876] 00:35:44.905 bw ( KiB/s): min= 384, max= 416, per=99.51%, avg=388.80, stdev=11.72, samples=20 00:35:44.905 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:44.905 lat (usec) : 500=0.41% 00:35:44.905 lat (msec) : 50=99.59% 00:35:44.905 cpu : usr=92.77%, sys=6.94%, ctx=18, majf=0, minf=0 00:35:44.905 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:44.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.905 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.905 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:44.905 00:35:44.905 Run 
status group 0 (all jobs): 00:35:44.905 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10013-10013msec 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.905 00:35:44.905 real 0m11.106s 00:35:44.905 user 0m15.924s 00:35:44.905 sys 0m0.968s 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:44.905 ************************************ 00:35:44.905 END TEST fio_dif_1_default 00:35:44.905 ************************************ 00:35:44.905 14:37:44 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:44.905 14:37:44 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:44.905 14:37:44 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:44.905 14:37:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:44.905 ************************************ 00:35:44.905 START TEST fio_dif_1_multi_subsystems 00:35:44.905 ************************************ 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:44.905 bdev_null0 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:44.905 [2024-12-10 14:37:44.181274] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:44.905 bdev_null1 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.905 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:44.906 { 00:35:44.906 "params": { 00:35:44.906 "name": "Nvme$subsystem", 00:35:44.906 "trtype": "$TEST_TRANSPORT", 00:35:44.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:44.906 "adrfam": "ipv4", 00:35:44.906 "trsvcid": "$NVMF_PORT", 00:35:44.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:44.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:44.906 "hdgst": ${hdgst:-false}, 00:35:44.906 "ddgst": ${ddgst:-false} 00:35:44.906 }, 00:35:44.906 "method": "bdev_nvme_attach_controller" 00:35:44.906 } 00:35:44.906 EOF 00:35:44.906 )") 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:44.906 14:37:44 
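
With bdev_null0 and bdev_null1 now exported behind cnode0 and cnode1 on the same 10.0.0.2:4420 listener, fio is started with one job per attached namespace (the filename0 and filename1 jobs reported below). The job file arrives on /dev/fd/61 and is never echoed in the trace; the following is a hypothetical reconstruction, assuming each attached controller NvmeN exposes its single namespace as bdev NvmeNn1 and using only option values that appear in the fio output:

    [global]
    ioengine=spdk_bdev
    thread=1            ; assumed; the SPDK fio plugins require threaded jobs
    rw=randread
    bs=4k
    iodepth=4

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1
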
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:44.906 { 00:35:44.906 "params": { 00:35:44.906 "name": "Nvme$subsystem", 00:35:44.906 "trtype": "$TEST_TRANSPORT", 00:35:44.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:44.906 "adrfam": "ipv4", 00:35:44.906 "trsvcid": "$NVMF_PORT", 00:35:44.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:44.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:44.906 "hdgst": ${hdgst:-false}, 00:35:44.906 "ddgst": ${ddgst:-false} 00:35:44.906 }, 00:35:44.906 "method": "bdev_nvme_attach_controller" 00:35:44.906 } 00:35:44.906 EOF 00:35:44.906 )") 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:44.906 "params": { 00:35:44.906 "name": "Nvme0", 00:35:44.906 "trtype": "tcp", 00:35:44.906 "traddr": "10.0.0.2", 00:35:44.906 "adrfam": "ipv4", 00:35:44.906 "trsvcid": "4420", 00:35:44.906 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:44.906 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:44.906 "hdgst": false, 00:35:44.906 "ddgst": false 00:35:44.906 }, 00:35:44.906 "method": "bdev_nvme_attach_controller" 00:35:44.906 },{ 00:35:44.906 "params": { 00:35:44.906 "name": "Nvme1", 00:35:44.906 "trtype": "tcp", 00:35:44.906 "traddr": "10.0.0.2", 00:35:44.906 "adrfam": "ipv4", 00:35:44.906 "trsvcid": "4420", 00:35:44.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:44.906 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:44.906 "hdgst": false, 00:35:44.906 "ddgst": false 00:35:44.906 }, 00:35:44.906 "method": "bdev_nvme_attach_controller" 00:35:44.906 }' 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:44.906 14:37:44 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:44.906 14:37:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:44.906 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:44.906 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:44.906 fio-3.35 00:35:44.906 Starting 2 threads 00:35:54.881 00:35:54.881 filename0: (groupid=0, jobs=1): err= 0: pid=1919701: Tue Dec 10 14:37:55 2024 00:35:54.881 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10006msec) 00:35:54.881 slat (nsec): min=6002, max=61711, avg=11846.99, stdev=9618.58 00:35:54.881 clat (usec): min=40780, max=42475, avg=41306.11, stdev=479.34 00:35:54.881 lat (usec): min=40786, max=42521, avg=41317.95, stdev=479.22 00:35:54.881 clat percentiles (usec): 00:35:54.881 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:35:54.881 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:54.881 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:54.881 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:35:54.881 | 99.99th=[42730] 00:35:54.881 bw ( KiB/s): min= 352, max= 416, per=29.90%, avg=385.60, stdev=12.61, samples=20 00:35:54.881 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:35:54.881 lat (msec) : 50=100.00% 00:35:54.881 cpu : usr=97.39%, sys=2.33%, ctx=10, majf=0, minf=118 00:35:54.881 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:54.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.881 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.881 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:54.881 filename1: (groupid=0, jobs=1): err= 0: pid=1919702: Tue Dec 10 14:37:55 2024 00:35:54.881 read: IOPS=225, BW=901KiB/s (923kB/s)(9040KiB/10029msec) 00:35:54.881 slat (nsec): min=6021, max=54999, avg=8820.12, stdev=6037.03 00:35:54.881 clat (usec): min=371, max=42637, avg=17723.15, stdev=20334.66 00:35:54.881 lat (usec): min=377, max=42644, avg=17731.97, stdev=20333.15 00:35:54.881 clat percentiles (usec): 00:35:54.881 | 1.00th=[ 396], 5.00th=[ 408], 10.00th=[ 416], 20.00th=[ 429], 00:35:54.881 | 30.00th=[ 437], 40.00th=[ 445], 50.00th=[ 461], 60.00th=[40633], 00:35:54.881 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42730], 00:35:54.881 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:35:54.881 | 99.99th=[42730] 00:35:54.881 bw ( KiB/s): min= 704, max= 1280, per=70.06%, avg=902.40, stdev=158.00, samples=20 00:35:54.881 iops : min= 176, max= 320, avg=225.60, stdev=39.50, samples=20 00:35:54.881 lat (usec) : 500=54.91%, 750=2.92%, 1000=0.13% 00:35:54.881 lat (msec) : 2=0.09%, 50=41.95% 00:35:54.881 cpu : usr=98.84%, sys=0.86%, ctx=32, majf=0, minf=124 00:35:54.881 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:54.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.881 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.881 issued rwts: total=2260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.881 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:54.881 00:35:54.881 Run status group 0 (all jobs): 00:35:54.881 READ: bw=1287KiB/s (1318kB/s), 387KiB/s-901KiB/s (396kB/s-923kB/s), io=12.6MiB (13.2MB), run=10006-10029msec 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.881 00:35:54.881 real 0m11.469s 00:35:54.881 user 0m26.460s 00:35:54.881 sys 0m0.677s 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:54.881 14:37:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:54.881 ************************************ 00:35:54.881 END TEST fio_dif_1_multi_subsystems 00:35:54.881 ************************************ 00:35:55.140 14:37:55 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 
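
fio_dif_rand_params re-runs the same flow under randomized job shapes; the first pass that follows uses DIF type 3 null bdevs with 128k blocks, three jobs at queue depth 3 for 5 seconds. On the target side only the bdev's protection settings change. The equivalent standalone RPCs, with arguments copied from the trace below (rpc.py is SPDK's stock JSON-RPC client):

    # 64 MiB null bdev, 512-byte blocks, 16 B of metadata per block, DIF type 3.
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
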
00:35:55.140 14:37:55 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:55.140 14:37:55 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:55.140 14:37:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:55.140 ************************************ 00:35:55.140 START TEST fio_dif_rand_params 00:35:55.140 ************************************ 00:35:55.140 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:35:55.140 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:55.140 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:55.140 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:55.140 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:55.140 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:55.140 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:55.140 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:55.140 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:55.140 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:55.140 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:55.140 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:55.140 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.141 bdev_null0 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.141 [2024-12-10 14:37:55.727691] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:55.141 { 00:35:55.141 "params": { 00:35:55.141 "name": "Nvme$subsystem", 00:35:55.141 "trtype": "$TEST_TRANSPORT", 00:35:55.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:55.141 "adrfam": "ipv4", 00:35:55.141 "trsvcid": "$NVMF_PORT", 00:35:55.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:55.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:55.141 "hdgst": ${hdgst:-false}, 00:35:55.141 "ddgst": ${ddgst:-false} 00:35:55.141 }, 00:35:55.141 "method": "bdev_nvme_attach_controller" 00:35:55.141 } 00:35:55.141 EOF 00:35:55.141 )") 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
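
The gen_nvmf_target_json expansion that just completed builds fio's --spdk_json_conf input by appending one bdev_nvme_attach_controller fragment per subsystem index to a config array, joining the fragments with IFS=, and pretty-printing the result with jq. A reduced sketch of that assembly; jq -n stands in for the script's heredoc template, and the outer subsystems/bdev wrapper is an assumption, not visible in the trace:

    # One attach-controller fragment per subsystem index
    # (add more indices for the multi-subsystem passes).
    config=()
    for sub in 0; do
        config+=("$(jq -n --arg sub "$sub" '{
            params: {
                name:    ("Nvme" + $sub),
                trtype:  "tcp",
                traddr:  "10.0.0.2",
                adrfam:  "ipv4",
                trsvcid: "4420",
                subnqn:  ("nqn.2016-06.io.spdk:cnode" + $sub),
                hostnqn: ("nqn.2016-06.io.spdk:host" + $sub),
                hdgst: false, ddgst: false
            },
            method: "bdev_nvme_attach_controller"
        }')")
    done
    # Join the fragments with commas and emit the full bdev config.
    body=$(IFS=,; printf '%s' "${config[*]}")
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "$body" | jq .

fio's bdev engine then attaches to each listed controller over TCP before the jobs start, so no kernel nvme devices are involved in these runs.
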
00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:55.141 "params": { 00:35:55.141 "name": "Nvme0", 00:35:55.141 "trtype": "tcp", 00:35:55.141 "traddr": "10.0.0.2", 00:35:55.141 "adrfam": "ipv4", 00:35:55.141 "trsvcid": "4420", 00:35:55.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:55.141 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:55.141 "hdgst": false, 00:35:55.141 "ddgst": false 00:35:55.141 }, 00:35:55.141 "method": "bdev_nvme_attach_controller" 00:35:55.141 }' 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:55.141 14:37:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.400 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:55.400 ... 
00:35:55.400 fio-3.35 00:35:55.400 Starting 3 threads 00:36:02.018 00:36:02.018 filename0: (groupid=0, jobs=1): err= 0: pid=1921646: Tue Dec 10 14:38:01 2024 00:36:02.018 read: IOPS=308, BW=38.6MiB/s (40.4MB/s)(193MiB/5007msec) 00:36:02.018 slat (nsec): min=6138, max=27082, avg=10998.55, stdev=2294.54 00:36:02.018 clat (usec): min=3461, max=51202, avg=9706.11, stdev=7600.95 00:36:02.018 lat (usec): min=3467, max=51216, avg=9717.11, stdev=7600.94 00:36:02.018 clat percentiles (usec): 00:36:02.018 | 1.00th=[ 3982], 5.00th=[ 5866], 10.00th=[ 6259], 20.00th=[ 7046], 00:36:02.018 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979], 00:36:02.018 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[10159], 00:36:02.018 | 99.00th=[50070], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:36:02.018 | 99.99th=[51119] 00:36:02.018 bw ( KiB/s): min=28672, max=48640, per=33.79%, avg=39500.80, stdev=6028.00, samples=10 00:36:02.018 iops : min= 224, max= 380, avg=308.60, stdev=47.09, samples=10 00:36:02.018 lat (msec) : 4=1.04%, 10=91.91%, 20=3.56%, 50=2.98%, 100=0.52% 00:36:02.018 cpu : usr=94.53%, sys=5.09%, ctx=11, majf=0, minf=36 00:36:02.018 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:02.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.019 issued rwts: total=1545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:02.019 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:02.019 filename0: (groupid=0, jobs=1): err= 0: pid=1921647: Tue Dec 10 14:38:01 2024 00:36:02.019 read: IOPS=295, BW=37.0MiB/s (38.8MB/s)(187MiB/5044msec) 00:36:02.019 slat (nsec): min=6128, max=24594, avg=10780.48, stdev=2280.14 00:36:02.019 clat (usec): min=3227, max=51754, avg=10101.42, stdev=7630.95 00:36:02.019 lat (usec): min=3234, max=51762, avg=10112.20, stdev=7630.82 00:36:02.019 clat percentiles (usec): 00:36:02.019 | 1.00th=[ 3851], 5.00th=[ 5669], 10.00th=[ 6390], 20.00th=[ 7177], 00:36:02.019 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9372], 00:36:02.019 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10552], 95.00th=[11207], 00:36:02.019 | 99.00th=[50070], 99.50th=[50594], 99.90th=[51119], 99.95th=[51643], 00:36:02.019 | 99.99th=[51643] 00:36:02.019 bw ( KiB/s): min=32000, max=44288, per=32.61%, avg=38118.40, stdev=4165.54, samples=10 00:36:02.019 iops : min= 250, max= 346, avg=297.80, stdev=32.54, samples=10 00:36:02.019 lat (msec) : 4=1.47%, 10=78.95%, 20=16.02%, 50=2.82%, 100=0.74% 00:36:02.019 cpu : usr=94.25%, sys=5.45%, ctx=11, majf=0, minf=39 00:36:02.019 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:02.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.019 issued rwts: total=1492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:02.019 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:02.019 filename0: (groupid=0, jobs=1): err= 0: pid=1921648: Tue Dec 10 14:38:01 2024 00:36:02.019 read: IOPS=313, BW=39.2MiB/s (41.1MB/s)(196MiB/5004msec) 00:36:02.019 slat (nsec): min=6119, max=24199, avg=10662.07, stdev=1908.52 00:36:02.019 clat (usec): min=3490, max=50400, avg=9553.68, stdev=4874.50 00:36:02.019 lat (usec): min=3497, max=50412, avg=9564.34, stdev=4874.56 00:36:02.019 clat percentiles (usec): 00:36:02.019 | 1.00th=[ 3818], 5.00th=[ 5866], 10.00th=[ 
6325], 20.00th=[ 6783], 00:36:02.019 | 30.00th=[ 7635], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[ 9896], 00:36:02.019 | 70.00th=[10421], 80.00th=[11076], 90.00th=[11731], 95.00th=[12125], 00:36:02.019 | 99.00th=[45876], 99.50th=[46924], 99.90th=[50070], 99.95th=[50594], 00:36:02.019 | 99.99th=[50594] 00:36:02.019 bw ( KiB/s): min=35584, max=43776, per=34.32%, avg=40115.20, stdev=2425.78, samples=10 00:36:02.019 iops : min= 278, max= 342, avg=313.40, stdev=18.95, samples=10 00:36:02.019 lat (msec) : 4=1.91%, 10=59.91%, 20=36.84%, 50=1.21%, 100=0.13% 00:36:02.019 cpu : usr=94.34%, sys=5.36%, ctx=10, majf=0, minf=69 00:36:02.019 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:02.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.019 issued rwts: total=1569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:02.019 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:02.019 00:36:02.019 Run status group 0 (all jobs): 00:36:02.019 READ: bw=114MiB/s (120MB/s), 37.0MiB/s-39.2MiB/s (38.8MB/s-41.1MB/s), io=576MiB (604MB), run=5004-5044msec 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.019 14:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.019 bdev_null0 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.019 [2024-12-10 14:38:02.022294] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.019 bdev_null1 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.019 14:38:02 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.019 bdev_null2 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:02.019 14:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
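
The fio_bdev wrapper expanded above is how a stock fio binary gets the SPDK bdev ioengine: autotest_common.sh ldd-probes the external engine for a sanitizer runtime so any libasan/libclang_rt.asan dependency can be preloaded ahead of it, then LD_PRELOADs the engine itself and hands over the two generated configs as inherited file descriptors. Stripped to its essentials, as traced:

    # Probe the engine for a sanitizer dependency (empty when not built with ASAN).
    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # fds 61 (fio job file) and 62 (bdev JSON config) are opened by the
    # caller onto the generated configs before fio starts.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
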
00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:02.020 { 00:36:02.020 "params": { 00:36:02.020 "name": "Nvme$subsystem", 00:36:02.020 "trtype": "$TEST_TRANSPORT", 00:36:02.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:02.020 "adrfam": "ipv4", 00:36:02.020 "trsvcid": "$NVMF_PORT", 00:36:02.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:02.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:02.020 "hdgst": ${hdgst:-false}, 00:36:02.020 "ddgst": ${ddgst:-false} 00:36:02.020 }, 00:36:02.020 "method": "bdev_nvme_attach_controller" 00:36:02.020 } 00:36:02.020 EOF 00:36:02.020 )") 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:02.020 { 00:36:02.020 "params": { 00:36:02.020 "name": "Nvme$subsystem", 00:36:02.020 "trtype": "$TEST_TRANSPORT", 00:36:02.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:02.020 "adrfam": "ipv4", 00:36:02.020 "trsvcid": "$NVMF_PORT", 00:36:02.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:02.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:02.020 "hdgst": ${hdgst:-false}, 00:36:02.020 "ddgst": ${ddgst:-false} 00:36:02.020 }, 00:36:02.020 "method": "bdev_nvme_attach_controller" 00:36:02.020 } 00:36:02.020 EOF 00:36:02.020 )") 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
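The gen_nvmf_target_json fragments interleaved with the fio plumbing above accumulate one bdev_nvme_attach_controller params block per subsystem and then comma-join the array. A stripped-down sketch of just that pattern (variable values are the ones substituted in the trace; the real helper additionally embeds the joined blocks in a full "subsystems" document and pretty-prints it, which is the jq/IFS pair visible at nvmf/common.sh@584-586):

#!/usr/bin/env bash
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
config=()
for subsystem in 0 1 2; do
    # heredoc delimiter is unquoted, so $subsystem and friends expand here
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}"   # emits {...},{...},{...} as printed at @586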
00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:02.020 { 00:36:02.020 "params": { 00:36:02.020 "name": "Nvme$subsystem", 00:36:02.020 "trtype": "$TEST_TRANSPORT", 00:36:02.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:02.020 "adrfam": "ipv4", 00:36:02.020 "trsvcid": "$NVMF_PORT", 00:36:02.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:02.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:02.020 "hdgst": ${hdgst:-false}, 00:36:02.020 "ddgst": ${ddgst:-false} 00:36:02.020 }, 00:36:02.020 "method": "bdev_nvme_attach_controller" 00:36:02.020 } 00:36:02.020 EOF 00:36:02.020 )") 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:02.020 "params": { 00:36:02.020 "name": "Nvme0", 00:36:02.020 "trtype": "tcp", 00:36:02.020 "traddr": "10.0.0.2", 00:36:02.020 "adrfam": "ipv4", 00:36:02.020 "trsvcid": "4420", 00:36:02.020 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:02.020 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:02.020 "hdgst": false, 00:36:02.020 "ddgst": false 00:36:02.020 }, 00:36:02.020 "method": "bdev_nvme_attach_controller" 00:36:02.020 },{ 00:36:02.020 "params": { 00:36:02.020 "name": "Nvme1", 00:36:02.020 "trtype": "tcp", 00:36:02.020 "traddr": "10.0.0.2", 00:36:02.020 "adrfam": "ipv4", 00:36:02.020 "trsvcid": "4420", 00:36:02.020 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:02.020 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:02.020 "hdgst": false, 00:36:02.020 "ddgst": false 00:36:02.020 }, 00:36:02.020 "method": "bdev_nvme_attach_controller" 00:36:02.020 },{ 00:36:02.020 "params": { 00:36:02.020 "name": "Nvme2", 00:36:02.020 "trtype": "tcp", 00:36:02.020 "traddr": "10.0.0.2", 00:36:02.020 "adrfam": "ipv4", 00:36:02.020 "trsvcid": "4420", 00:36:02.020 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:02.020 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:02.020 "hdgst": false, 00:36:02.020 "ddgst": false 00:36:02.020 }, 00:36:02.020 "method": "bdev_nvme_attach_controller" 00:36:02.020 }' 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # 
asan_lib= 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:02.020 14:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:02.020 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:02.020 ... 00:36:02.020 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:02.020 ... 00:36:02.020 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:02.020 ... 00:36:02.020 fio-3.35 00:36:02.020 Starting 24 threads 00:36:14.380 00:36:14.380 filename0: (groupid=0, jobs=1): err= 0: pid=1922808: Tue Dec 10 14:38:13 2024 00:36:14.380 read: IOPS=593, BW=2373KiB/s (2430kB/s)(23.2MiB/10007msec) 00:36:14.380 slat (usec): min=6, max=112, avg=44.99, stdev=20.64 00:36:14.380 clat (usec): min=14009, max=31945, avg=26593.25, stdev=2129.26 00:36:14.380 lat (usec): min=14027, max=31969, avg=26638.24, stdev=2132.59 00:36:14.380 clat percentiles (usec): 00:36:14.380 | 1.00th=[23987], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:36:14.380 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26870], 00:36:14.380 | 70.00th=[27395], 80.00th=[28443], 90.00th=[30016], 95.00th=[30540], 00:36:14.380 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 00:36:14.380 | 99.99th=[31851] 00:36:14.380 bw ( KiB/s): min= 2171, max= 2688, per=4.16%, avg=2383.53, stdev=142.91, samples=19 00:36:14.380 iops : min= 542, max= 672, avg=595.68, stdev=35.74, samples=19 00:36:14.380 lat (msec) : 20=0.54%, 50=99.46% 00:36:14.380 cpu : usr=98.80%, sys=0.73%, ctx=37, majf=0, minf=9 00:36:14.380 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:14.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.380 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.380 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.380 filename0: (groupid=0, jobs=1): err= 0: pid=1922809: Tue Dec 10 14:38:13 2024 00:36:14.380 read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.1MiB/10004msec) 00:36:14.380 slat (usec): min=3, max=105, avg=47.45, stdev=17.26 00:36:14.380 clat (usec): min=18950, max=33510, avg=26603.95, stdev=2040.69 00:36:14.380 lat (usec): min=18959, max=33522, avg=26651.40, stdev=2042.95 00:36:14.380 clat percentiles (usec): 00:36:14.380 | 1.00th=[23987], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:36:14.380 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26870], 00:36:14.380 | 70.00th=[27395], 80.00th=[28443], 90.00th=[30016], 95.00th=[30540], 00:36:14.380 | 99.00th=[31327], 99.50th=[31589], 99.90th=[33424], 99.95th=[33424], 00:36:14.380 | 99.99th=[33424] 00:36:14.380 bw ( KiB/s): min= 2176, max= 2688, per=4.14%, avg=2370.47, stdev=136.68, samples=19 00:36:14.380 iops : min= 544, max= 672, avg=592.47, stdev=34.08, samples=19 00:36:14.380 lat (msec) : 20=0.27%, 50=99.73% 00:36:14.380 cpu : usr=98.73%, sys=0.85%, ctx=39, majf=0, minf=9 00:36:14.380 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 
16=6.2%, 32=0.0%, >=64=0.0% 00:36:14.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.380 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.380 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.380 filename0: (groupid=0, jobs=1): err= 0: pid=1922810: Tue Dec 10 14:38:13 2024 00:36:14.380 read: IOPS=592, BW=2370KiB/s (2427kB/s)(23.2MiB/10018msec) 00:36:14.380 slat (nsec): min=5874, max=97586, avg=46637.10, stdev=15988.37 00:36:14.380 clat (usec): min=17678, max=31746, avg=26609.32, stdev=2065.92 00:36:14.380 lat (usec): min=17692, max=31761, avg=26655.96, stdev=2067.92 00:36:14.380 clat percentiles (usec): 00:36:14.380 | 1.00th=[23987], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:36:14.380 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:36:14.380 | 70.00th=[27395], 80.00th=[28443], 90.00th=[30016], 95.00th=[30540], 00:36:14.380 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:36:14.380 | 99.99th=[31851] 00:36:14.380 bw ( KiB/s): min= 2176, max= 2565, per=4.15%, avg=2377.53, stdev=143.52, samples=19 00:36:14.380 iops : min= 544, max= 641, avg=594.26, stdev=35.84, samples=19 00:36:14.380 lat (msec) : 20=0.54%, 50=99.46% 00:36:14.380 cpu : usr=98.05%, sys=1.34%, ctx=68, majf=0, minf=9 00:36:14.380 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:14.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.380 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.380 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.380 filename0: (groupid=0, jobs=1): err= 0: pid=1922811: Tue Dec 10 14:38:13 2024 00:36:14.380 read: IOPS=594, BW=2379KiB/s (2436kB/s)(23.2MiB/10008msec) 00:36:14.380 slat (usec): min=6, max=104, avg=47.81, stdev=17.67 00:36:14.380 clat (usec): min=10445, max=31783, avg=26466.23, stdev=2375.12 00:36:14.380 lat (usec): min=10465, max=31818, avg=26514.04, stdev=2379.30 00:36:14.380 clat percentiles (usec): 00:36:14.380 | 1.00th=[17957], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:36:14.380 | 30.00th=[25035], 40.00th=[25297], 50.00th=[26084], 60.00th=[26870], 00:36:14.380 | 70.00th=[27395], 80.00th=[28443], 90.00th=[29754], 95.00th=[30540], 00:36:14.380 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:36:14.380 | 99.99th=[31851] 00:36:14.380 bw ( KiB/s): min= 2171, max= 2816, per=4.18%, avg=2390.26, stdev=160.05, samples=19 00:36:14.380 iops : min= 542, max= 704, avg=597.37, stdev=40.03, samples=19 00:36:14.380 lat (msec) : 20=1.04%, 50=98.96% 00:36:14.380 cpu : usr=98.97%, sys=0.64%, ctx=14, majf=0, minf=9 00:36:14.380 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:14.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.380 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.380 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.380 filename0: (groupid=0, jobs=1): err= 0: pid=1922812: Tue Dec 10 14:38:13 2024 00:36:14.380 read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.1MiB/10003msec) 00:36:14.380 slat (usec): min=5, max=111, avg=46.34, stdev=19.53 00:36:14.380 clat (usec): min=10817, 
max=45088, avg=26619.72, stdev=2341.43 00:36:14.380 lat (usec): min=10831, max=45104, avg=26666.06, stdev=2343.14 00:36:14.380 clat percentiles (usec): 00:36:14.380 | 1.00th=[23725], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:36:14.380 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26870], 00:36:14.380 | 70.00th=[27395], 80.00th=[28705], 90.00th=[30016], 95.00th=[30540], 00:36:14.380 | 99.00th=[31065], 99.50th=[31327], 99.90th=[44827], 99.95th=[44827], 00:36:14.380 | 99.99th=[44827] 00:36:14.380 bw ( KiB/s): min= 2176, max= 2560, per=4.14%, avg=2369.95, stdev=136.64, samples=19 00:36:14.380 iops : min= 544, max= 640, avg=592.32, stdev=34.11, samples=19 00:36:14.380 lat (msec) : 20=0.44%, 50=99.56% 00:36:14.380 cpu : usr=98.58%, sys=0.92%, ctx=50, majf=0, minf=9 00:36:14.380 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:14.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.380 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.380 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.380 filename0: (groupid=0, jobs=1): err= 0: pid=1922813: Tue Dec 10 14:38:13 2024 00:36:14.380 read: IOPS=696, BW=2786KiB/s (2852kB/s)(27.2MiB/10006msec) 00:36:14.380 slat (nsec): min=3793, max=94132, avg=16074.67, stdev=14515.82 00:36:14.380 clat (usec): min=7437, max=44612, avg=22893.35, stdev=5167.19 00:36:14.380 lat (usec): min=7453, max=44661, avg=22909.43, stdev=5169.23 00:36:14.380 clat percentiles (usec): 00:36:14.380 | 1.00th=[13829], 5.00th=[14615], 10.00th=[15139], 20.00th=[18744], 00:36:14.380 | 30.00th=[19792], 40.00th=[21103], 50.00th=[23987], 60.00th=[25035], 00:36:14.380 | 70.00th=[25297], 80.00th=[26608], 90.00th=[29230], 95.00th=[30802], 00:36:14.380 | 99.00th=[37487], 99.50th=[39060], 99.90th=[44303], 99.95th=[44303], 00:36:14.380 | 99.99th=[44827] 00:36:14.380 bw ( KiB/s): min= 2256, max= 3369, per=4.83%, avg=2765.53, stdev=315.04, samples=19 00:36:14.380 iops : min= 564, max= 842, avg=691.26, stdev=78.74, samples=19 00:36:14.380 lat (msec) : 10=0.46%, 20=30.86%, 50=68.69% 00:36:14.380 cpu : usr=98.65%, sys=0.95%, ctx=34, majf=0, minf=9 00:36:14.380 IO depths : 1=0.1%, 2=0.4%, 4=4.8%, 8=79.9%, 16=14.9%, 32=0.0%, >=64=0.0% 00:36:14.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.380 complete : 0=0.0%, 4=89.3%, 8=7.5%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.380 issued rwts: total=6968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.380 filename0: (groupid=0, jobs=1): err= 0: pid=1922814: Tue Dec 10 14:38:13 2024 00:36:14.380 read: IOPS=591, BW=2366KiB/s (2422kB/s)(23.1MiB/10020msec) 00:36:14.380 slat (usec): min=7, max=119, avg=37.35, stdev=22.08 00:36:14.380 clat (usec): min=17980, max=31726, avg=26697.60, stdev=2008.06 00:36:14.380 lat (usec): min=17988, max=31771, avg=26734.96, stdev=2013.74 00:36:14.380 clat percentiles (usec): 00:36:14.380 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:36:14.380 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26084], 60.00th=[26870], 00:36:14.380 | 70.00th=[27395], 80.00th=[28705], 90.00th=[30016], 95.00th=[30540], 00:36:14.380 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:36:14.380 | 99.99th=[31851] 00:36:14.380 bw ( KiB/s): min= 2176, max= 2560, per=4.15%, avg=2377.00, stdev=143.05, 
samples=19 00:36:14.380 iops : min= 544, max= 640, avg=594.11, stdev=35.72, samples=19 00:36:14.380 lat (msec) : 20=0.32%, 50=99.68% 00:36:14.380 cpu : usr=98.22%, sys=1.21%, ctx=75, majf=0, minf=9 00:36:14.380 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:14.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.381 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.381 issued rwts: total=5926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.381 filename0: (groupid=0, jobs=1): err= 0: pid=1922815: Tue Dec 10 14:38:13 2024 00:36:14.381 read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.1MiB/10003msec) 00:36:14.381 slat (nsec): min=10266, max=96618, avg=45931.89, stdev=14804.18 00:36:14.381 clat (usec): min=10884, max=46511, avg=26647.01, stdev=2375.17 00:36:14.381 lat (usec): min=10908, max=46532, avg=26692.94, stdev=2374.98 00:36:14.381 clat percentiles (usec): 00:36:14.381 | 1.00th=[23725], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:36:14.381 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26870], 00:36:14.381 | 70.00th=[27395], 80.00th=[28705], 90.00th=[30016], 95.00th=[30540], 00:36:14.381 | 99.00th=[31327], 99.50th=[31589], 99.90th=[44827], 99.95th=[44827], 00:36:14.381 | 99.99th=[46400] 00:36:14.381 bw ( KiB/s): min= 2176, max= 2560, per=4.14%, avg=2370.16, stdev=136.75, samples=19 00:36:14.381 iops : min= 544, max= 640, avg=592.37, stdev=34.14, samples=19 00:36:14.381 lat (msec) : 20=0.35%, 50=99.65% 00:36:14.381 cpu : usr=98.07%, sys=1.30%, ctx=127, majf=0, minf=9 00:36:14.381 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:14.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.381 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.381 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.381 filename1: (groupid=0, jobs=1): err= 0: pid=1922817: Tue Dec 10 14:38:13 2024 00:36:14.381 read: IOPS=594, BW=2379KiB/s (2436kB/s)(23.2MiB/10008msec) 00:36:14.381 slat (nsec): min=9530, max=96628, avg=44816.28, stdev=16950.04 00:36:14.381 clat (usec): min=9667, max=31732, avg=26543.15, stdev=2399.76 00:36:14.381 lat (usec): min=9689, max=31749, avg=26587.97, stdev=2401.85 00:36:14.381 clat percentiles (usec): 00:36:14.381 | 1.00th=[17957], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:36:14.381 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:36:14.381 | 70.00th=[27395], 80.00th=[28443], 90.00th=[30016], 95.00th=[30540], 00:36:14.381 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:36:14.381 | 99.99th=[31851] 00:36:14.381 bw ( KiB/s): min= 2171, max= 2816, per=4.18%, avg=2390.26, stdev=160.05, samples=19 00:36:14.381 iops : min= 542, max= 704, avg=597.37, stdev=40.03, samples=19 00:36:14.381 lat (msec) : 10=0.12%, 20=0.96%, 50=98.92% 00:36:14.381 cpu : usr=97.98%, sys=1.26%, ctx=182, majf=0, minf=9 00:36:14.381 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:14.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.381 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.381 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.381 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:36:14.381 filename1: (groupid=0, jobs=1): err= 0: pid=1922818: Tue Dec 10 14:38:13 2024 00:36:14.381 read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.1MiB/10004msec) 00:36:14.381 slat (nsec): min=3995, max=99335, avg=47397.28, stdev=15456.54 00:36:14.381 clat (usec): min=12983, max=34670, avg=26618.86, stdev=2067.95 00:36:14.381 lat (usec): min=12991, max=34686, avg=26666.26, stdev=2069.62 00:36:14.381 clat percentiles (usec): 00:36:14.381 | 1.00th=[23987], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:36:14.381 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:36:14.381 | 70.00th=[27395], 80.00th=[28443], 90.00th=[30016], 95.00th=[30540], 00:36:14.381 | 99.00th=[31327], 99.50th=[31589], 99.90th=[33424], 99.95th=[33424], 00:36:14.381 | 99.99th=[34866] 00:36:14.381 bw ( KiB/s): min= 2176, max= 2688, per=4.14%, avg=2370.47, stdev=136.68, samples=19 00:36:14.381 iops : min= 544, max= 672, avg=592.47, stdev=34.08, samples=19 00:36:14.381 lat (msec) : 20=0.30%, 50=99.70% 00:36:14.381 cpu : usr=98.40%, sys=1.05%, ctx=68, majf=0, minf=9 00:36:14.381 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:14.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.381 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.381 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.381 filename1: (groupid=0, jobs=1): err= 0: pid=1922819: Tue Dec 10 14:38:13 2024 00:36:14.381 read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.1MiB/10004msec) 00:36:14.381 slat (nsec): min=6865, max=90541, avg=45801.00, stdev=14734.71 00:36:14.381 clat (usec): min=10818, max=45680, avg=26647.52, stdev=2381.18 00:36:14.381 lat (usec): min=10834, max=45695, avg=26693.33, stdev=2381.06 00:36:14.381 clat percentiles (usec): 00:36:14.381 | 1.00th=[23725], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:36:14.381 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26870], 00:36:14.381 | 70.00th=[27395], 80.00th=[28705], 90.00th=[30016], 95.00th=[30540], 00:36:14.381 | 99.00th=[31327], 99.50th=[31589], 99.90th=[45876], 99.95th=[45876], 00:36:14.381 | 99.99th=[45876] 00:36:14.381 bw ( KiB/s): min= 2176, max= 2560, per=4.14%, avg=2369.95, stdev=136.64, samples=19 00:36:14.381 iops : min= 544, max= 640, avg=592.32, stdev=34.11, samples=19 00:36:14.381 lat (msec) : 20=0.34%, 50=99.66% 00:36:14.381 cpu : usr=98.20%, sys=1.17%, ctx=86, majf=0, minf=9 00:36:14.381 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:14.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.381 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.381 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.381 filename1: (groupid=0, jobs=1): err= 0: pid=1922820: Tue Dec 10 14:38:13 2024 00:36:14.381 read: IOPS=591, BW=2366KiB/s (2422kB/s)(23.1MiB/10020msec) 00:36:14.381 slat (nsec): min=7291, max=83831, avg=30785.45, stdev=16930.10 00:36:14.381 clat (usec): min=16505, max=33332, avg=26785.03, stdev=2057.59 00:36:14.381 lat (usec): min=16515, max=33359, avg=26815.81, stdev=2057.47 00:36:14.381 clat percentiles (usec): 00:36:14.381 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:36:14.381 | 30.00th=[25297], 40.00th=[25560], 
50.00th=[26346], 60.00th=[26870], 00:36:14.381 | 70.00th=[27395], 80.00th=[28705], 90.00th=[30278], 95.00th=[30802], 00:36:14.381 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 00:36:14.381 | 99.99th=[33424] 00:36:14.381 bw ( KiB/s): min= 2176, max= 2560, per=4.15%, avg=2377.00, stdev=143.05, samples=19 00:36:14.381 iops : min= 544, max= 640, avg=594.11, stdev=35.72, samples=19 00:36:14.381 lat (msec) : 20=0.32%, 50=99.68% 00:36:14.381 cpu : usr=98.46%, sys=1.00%, ctx=47, majf=0, minf=9 00:36:14.381 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:14.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.381 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.381 issued rwts: total=5926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.381 filename1: (groupid=0, jobs=1): err= 0: pid=1922821: Tue Dec 10 14:38:13 2024 00:36:14.381 read: IOPS=594, BW=2379KiB/s (2436kB/s)(23.2MiB/10007msec) 00:36:14.381 slat (nsec): min=7619, max=99051, avg=36893.94, stdev=18594.66 00:36:14.381 clat (usec): min=7456, max=34653, avg=26625.01, stdev=2415.15 00:36:14.381 lat (usec): min=7468, max=34691, avg=26661.90, stdev=2416.61 00:36:14.381 clat percentiles (usec): 00:36:14.381 | 1.00th=[17957], 5.00th=[24249], 10.00th=[24511], 20.00th=[25035], 00:36:14.381 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:36:14.381 | 70.00th=[27395], 80.00th=[28705], 90.00th=[30016], 95.00th=[30540], 00:36:14.381 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 00:36:14.381 | 99.99th=[34866] 00:36:14.381 bw ( KiB/s): min= 2171, max= 2816, per=4.18%, avg=2390.26, stdev=160.05, samples=19 00:36:14.381 iops : min= 542, max= 704, avg=597.37, stdev=40.03, samples=19 00:36:14.381 lat (msec) : 10=0.03%, 20=1.08%, 50=98.89% 00:36:14.381 cpu : usr=97.72%, sys=1.43%, ctx=150, majf=0, minf=9 00:36:14.381 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:14.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.381 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.381 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.381 filename1: (groupid=0, jobs=1): err= 0: pid=1922822: Tue Dec 10 14:38:13 2024 00:36:14.381 read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.1MiB/10003msec) 00:36:14.381 slat (usec): min=5, max=116, avg=47.28, stdev=19.53 00:36:14.381 clat (usec): min=10698, max=45221, avg=26594.90, stdev=2351.79 00:36:14.381 lat (usec): min=10712, max=45237, avg=26642.19, stdev=2353.89 00:36:14.381 clat percentiles (usec): 00:36:14.381 | 1.00th=[23987], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:36:14.381 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26870], 00:36:14.381 | 70.00th=[27395], 80.00th=[28443], 90.00th=[30016], 95.00th=[30540], 00:36:14.381 | 99.00th=[31065], 99.50th=[31327], 99.90th=[45351], 99.95th=[45351], 00:36:14.381 | 99.99th=[45351] 00:36:14.381 bw ( KiB/s): min= 2176, max= 2560, per=4.14%, avg=2369.95, stdev=136.64, samples=19 00:36:14.381 iops : min= 544, max= 640, avg=592.32, stdev=34.11, samples=19 00:36:14.381 lat (msec) : 20=0.41%, 50=99.59% 00:36:14.381 cpu : usr=98.99%, sys=0.63%, ctx=35, majf=0, minf=9 00:36:14.381 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 
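Three recurring fields in these per-job blocks are worth decoding. per= is the job's mean bandwidth as a share of the group total reported on the READ line of the run-status summary further down (55.9 MiB/s ~= 57,242 KiB/s), IOPS is bandwidth divided by the 4 KiB block size, and the IO depths line is a distribution of the queue depth over the job's lifetime, bucketed at powers of two. Checking the job just above against those definitions:

per   : 2369.95 KiB/s / 57241.6 KiB/s ~= 4.14%   (matches per=4.14%)
IOPS  : 2367 KiB/s / 4 KiB            ~= 592     (matches IOPS=591)
depth : 6.2% + 12.5% + 25.0% + 50.0% + 6.2% ~= 100%  (buckets 1/2/4/8/16, up to rounding)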
00:36:14.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.381 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.381 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.381 filename1: (groupid=0, jobs=1): err= 0: pid=1922823: Tue Dec 10 14:38:13 2024 00:36:14.381 read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.1MiB/10004msec) 00:36:14.381 slat (nsec): min=4465, max=39019, avg=10139.21, stdev=4614.91 00:36:14.381 clat (usec): min=13415, max=45535, avg=26940.96, stdev=2366.81 00:36:14.381 lat (usec): min=13424, max=45543, avg=26951.10, stdev=2366.91 00:36:14.381 clat percentiles (usec): 00:36:14.381 | 1.00th=[23725], 5.00th=[24511], 10.00th=[25035], 20.00th=[25035], 00:36:14.381 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 60.00th=[27132], 00:36:14.381 | 70.00th=[27657], 80.00th=[28967], 90.00th=[30278], 95.00th=[31065], 00:36:14.381 | 99.00th=[31589], 99.50th=[36439], 99.90th=[43779], 99.95th=[44827], 00:36:14.381 | 99.99th=[45351] 00:36:14.381 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2378.05, stdev=136.82, samples=19 00:36:14.381 iops : min= 544, max= 640, avg=594.47, stdev=34.19, samples=19 00:36:14.382 lat (msec) : 20=0.51%, 50=99.49% 00:36:14.382 cpu : usr=98.30%, sys=1.13%, ctx=145, majf=0, minf=9 00:36:14.382 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:14.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.382 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.382 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.382 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.382 filename1: (groupid=0, jobs=1): err= 0: pid=1922824: Tue Dec 10 14:38:13 2024 00:36:14.382 read: IOPS=592, BW=2371KiB/s (2428kB/s)(23.2MiB/10015msec) 00:36:14.382 slat (nsec): min=8444, max=96364, avg=47273.91, stdev=15430.78 00:36:14.382 clat (usec): min=13175, max=31730, avg=26588.04, stdev=2076.37 00:36:14.382 lat (usec): min=13190, max=31754, avg=26635.32, stdev=2078.48 00:36:14.382 clat percentiles (usec): 00:36:14.382 | 1.00th=[23987], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:36:14.382 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26870], 00:36:14.382 | 70.00th=[27395], 80.00th=[28443], 90.00th=[30016], 95.00th=[30540], 00:36:14.382 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:36:14.382 | 99.99th=[31851] 00:36:14.382 bw ( KiB/s): min= 2176, max= 2560, per=4.15%, avg=2377.53, stdev=149.24, samples=19 00:36:14.382 iops : min= 544, max= 640, avg=594.32, stdev=37.26, samples=19 00:36:14.382 lat (msec) : 20=0.54%, 50=99.46% 00:36:14.382 cpu : usr=97.79%, sys=1.36%, ctx=200, majf=0, minf=9 00:36:14.382 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:14.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.382 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.382 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.382 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.382 filename2: (groupid=0, jobs=1): err= 0: pid=1922825: Tue Dec 10 14:38:13 2024 00:36:14.382 read: IOPS=591, BW=2366KiB/s (2423kB/s)(23.1MiB/10008msec) 00:36:14.382 slat (nsec): min=3786, max=92791, avg=45569.28, stdev=14944.95 00:36:14.382 clat (usec): min=20000, 
max=36959, avg=26663.05, stdev=2080.70 00:36:14.382 lat (usec): min=20018, max=36971, avg=26708.62, stdev=2080.99 00:36:14.382 clat percentiles (usec): 00:36:14.382 | 1.00th=[23725], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:36:14.382 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:36:14.382 | 70.00th=[27395], 80.00th=[28705], 90.00th=[30016], 95.00th=[30540], 00:36:14.382 | 99.00th=[31327], 99.50th=[31589], 99.90th=[36963], 99.95th=[36963], 00:36:14.382 | 99.99th=[36963] 00:36:14.382 bw ( KiB/s): min= 2176, max= 2688, per=4.14%, avg=2371.11, stdev=137.27, samples=19 00:36:14.382 iops : min= 544, max= 672, avg=592.74, stdev=34.30, samples=19 00:36:14.382 lat (msec) : 50=100.00% 00:36:14.382 cpu : usr=98.63%, sys=0.84%, ctx=75, majf=0, minf=9 00:36:14.382 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:14.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.382 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.382 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.382 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.382 filename2: (groupid=0, jobs=1): err= 0: pid=1922826: Tue Dec 10 14:38:13 2024 00:36:14.382 read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.1MiB/10003msec) 00:36:14.382 slat (usec): min=6, max=116, avg=47.56, stdev=20.04 00:36:14.382 clat (usec): min=10735, max=46589, avg=26574.60, stdev=2361.68 00:36:14.382 lat (usec): min=10752, max=46609, avg=26622.17, stdev=2363.85 00:36:14.382 clat percentiles (usec): 00:36:14.382 | 1.00th=[23725], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:36:14.382 | 30.00th=[25035], 40.00th=[25297], 50.00th=[26084], 60.00th=[26870], 00:36:14.382 | 70.00th=[27395], 80.00th=[28443], 90.00th=[30016], 95.00th=[30540], 00:36:14.382 | 99.00th=[31065], 99.50th=[31327], 99.90th=[44827], 99.95th=[44827], 00:36:14.382 | 99.99th=[46400] 00:36:14.382 bw ( KiB/s): min= 2176, max= 2560, per=4.14%, avg=2369.95, stdev=136.64, samples=19 00:36:14.382 iops : min= 544, max= 640, avg=592.32, stdev=34.11, samples=19 00:36:14.382 lat (msec) : 20=0.42%, 50=99.58% 00:36:14.382 cpu : usr=99.04%, sys=0.57%, ctx=34, majf=0, minf=9 00:36:14.382 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:14.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.382 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.382 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.382 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.382 filename2: (groupid=0, jobs=1): err= 0: pid=1922827: Tue Dec 10 14:38:13 2024 00:36:14.382 read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.1MiB/10004msec) 00:36:14.382 slat (nsec): min=9180, max=90035, avg=43179.09, stdev=16011.65 00:36:14.382 clat (usec): min=10890, max=45611, avg=26695.08, stdev=2377.45 00:36:14.382 lat (usec): min=10963, max=45627, avg=26738.26, stdev=2377.02 00:36:14.382 clat percentiles (usec): 00:36:14.382 | 1.00th=[23725], 5.00th=[24249], 10.00th=[24511], 20.00th=[25035], 00:36:14.382 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:36:14.382 | 70.00th=[27395], 80.00th=[28705], 90.00th=[30016], 95.00th=[30802], 00:36:14.382 | 99.00th=[31327], 99.50th=[31589], 99.90th=[45351], 99.95th=[45351], 00:36:14.382 | 99.99th=[45351] 00:36:14.382 bw ( KiB/s): min= 2176, max= 2560, per=4.14%, avg=2369.95, stdev=136.64, samples=19 
00:36:14.382 iops : min= 544, max= 640, avg=592.32, stdev=34.11, samples=19 00:36:14.382 lat (msec) : 20=0.30%, 50=99.70% 00:36:14.382 cpu : usr=97.82%, sys=1.43%, ctx=126, majf=0, minf=9 00:36:14.382 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:14.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.382 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.382 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.382 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.382 filename2: (groupid=0, jobs=1): err= 0: pid=1922828: Tue Dec 10 14:38:13 2024 00:36:14.382 read: IOPS=590, BW=2364KiB/s (2421kB/s)(23.1MiB/10019msec) 00:36:14.382 slat (usec): min=8, max=105, avg=46.94, stdev=18.18 00:36:14.382 clat (usec): min=13146, max=35925, avg=26599.81, stdev=2040.23 00:36:14.382 lat (usec): min=13159, max=35951, avg=26646.75, stdev=2043.50 00:36:14.382 clat percentiles (usec): 00:36:14.382 | 1.00th=[23987], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:36:14.382 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26870], 00:36:14.382 | 70.00th=[27395], 80.00th=[28443], 90.00th=[30016], 95.00th=[30540], 00:36:14.382 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31589], 99.95th=[31851], 00:36:14.382 | 99.99th=[35914] 00:36:14.382 bw ( KiB/s): min= 2176, max= 2560, per=4.15%, avg=2377.26, stdev=143.16, samples=19 00:36:14.382 iops : min= 544, max= 640, avg=594.21, stdev=35.76, samples=19 00:36:14.382 lat (msec) : 20=0.29%, 50=99.71% 00:36:14.382 cpu : usr=98.87%, sys=0.74%, ctx=28, majf=0, minf=9 00:36:14.382 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:14.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.382 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.382 issued rwts: total=5921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.382 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.382 filename2: (groupid=0, jobs=1): err= 0: pid=1922830: Tue Dec 10 14:38:13 2024 00:36:14.382 read: IOPS=591, BW=2366KiB/s (2423kB/s)(23.2MiB/10020msec) 00:36:14.382 slat (usec): min=6, max=108, avg=29.47, stdev=21.41 00:36:14.382 clat (usec): min=18021, max=31795, avg=26775.64, stdev=2034.77 00:36:14.382 lat (usec): min=18033, max=31820, avg=26805.11, stdev=2038.42 00:36:14.382 clat percentiles (usec): 00:36:14.382 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:36:14.382 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26870], 00:36:14.382 | 70.00th=[27395], 80.00th=[28705], 90.00th=[30016], 95.00th=[30802], 00:36:14.382 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 00:36:14.382 | 99.99th=[31851] 00:36:14.382 bw ( KiB/s): min= 2176, max= 2560, per=4.15%, avg=2377.00, stdev=143.05, samples=19 00:36:14.382 iops : min= 544, max= 640, avg=594.11, stdev=35.72, samples=19 00:36:14.382 lat (msec) : 20=0.34%, 50=99.66% 00:36:14.382 cpu : usr=97.27%, sys=1.48%, ctx=575, majf=0, minf=9 00:36:14.382 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:14.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.382 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.382 issued rwts: total=5928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.382 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.382 filename2: 
(groupid=0, jobs=1): err= 0: pid=1922831: Tue Dec 10 14:38:13 2024 00:36:14.382 read: IOPS=594, BW=2379KiB/s (2436kB/s)(23.2MiB/10008msec) 00:36:14.382 slat (usec): min=7, max=104, avg=46.05, stdev=19.09 00:36:14.382 clat (usec): min=10401, max=31741, avg=26495.89, stdev=2355.74 00:36:14.382 lat (usec): min=10434, max=31772, avg=26541.93, stdev=2361.21 00:36:14.382 clat percentiles (usec): 00:36:14.382 | 1.00th=[17957], 5.00th=[24249], 10.00th=[24511], 20.00th=[25035], 00:36:14.382 | 30.00th=[25035], 40.00th=[25297], 50.00th=[26084], 60.00th=[26870], 00:36:14.382 | 70.00th=[27395], 80.00th=[28443], 90.00th=[29754], 95.00th=[30540], 00:36:14.382 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:36:14.382 | 99.99th=[31851] 00:36:14.382 bw ( KiB/s): min= 2171, max= 2816, per=4.18%, avg=2390.26, stdev=160.05, samples=19 00:36:14.382 iops : min= 542, max= 704, avg=597.37, stdev=40.03, samples=19 00:36:14.382 lat (msec) : 20=1.08%, 50=98.92% 00:36:14.382 cpu : usr=99.12%, sys=0.48%, ctx=18, majf=0, minf=9 00:36:14.382 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:14.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.382 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.382 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.382 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.382 filename2: (groupid=0, jobs=1): err= 0: pid=1922832: Tue Dec 10 14:38:13 2024 00:36:14.382 read: IOPS=594, BW=2379KiB/s (2436kB/s)(23.2MiB/10007msec) 00:36:14.382 slat (nsec): min=7139, max=92039, avg=39236.41, stdev=18113.52 00:36:14.382 clat (usec): min=10401, max=31716, avg=26607.31, stdev=2407.55 00:36:14.382 lat (usec): min=10413, max=31733, avg=26646.55, stdev=2408.18 00:36:14.382 clat percentiles (usec): 00:36:14.382 | 1.00th=[17957], 5.00th=[24249], 10.00th=[24511], 20.00th=[25035], 00:36:14.382 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:36:14.383 | 70.00th=[27395], 80.00th=[28705], 90.00th=[30016], 95.00th=[30540], 00:36:14.383 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31589], 99.95th=[31589], 00:36:14.383 | 99.99th=[31589] 00:36:14.383 bw ( KiB/s): min= 2171, max= 2816, per=4.18%, avg=2390.26, stdev=160.05, samples=19 00:36:14.383 iops : min= 542, max= 704, avg=597.37, stdev=40.03, samples=19 00:36:14.383 lat (msec) : 20=1.08%, 50=98.92% 00:36:14.383 cpu : usr=98.24%, sys=1.06%, ctx=158, majf=0, minf=9 00:36:14.383 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:14.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.383 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.383 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.383 filename2: (groupid=0, jobs=1): err= 0: pid=1922833: Tue Dec 10 14:38:13 2024 00:36:14.383 read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.1MiB/10003msec) 00:36:14.383 slat (nsec): min=8176, max=84545, avg=36094.00, stdev=17387.35 00:36:14.383 clat (usec): min=10380, max=45074, avg=26763.50, stdev=2365.58 00:36:14.383 lat (usec): min=10390, max=45093, avg=26799.59, stdev=2364.81 00:36:14.383 clat percentiles (usec): 00:36:14.383 | 1.00th=[23725], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:36:14.383 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:36:14.383 | 70.00th=[27395], 
80.00th=[28705], 90.00th=[30016], 95.00th=[30802], 00:36:14.383 | 99.00th=[31327], 99.50th=[31589], 99.90th=[44827], 99.95th=[44827], 00:36:14.383 | 99.99th=[44827] 00:36:14.383 bw ( KiB/s): min= 2176, max= 2560, per=4.14%, avg=2370.16, stdev=136.75, samples=19 00:36:14.383 iops : min= 544, max= 640, avg=592.37, stdev=34.14, samples=19 00:36:14.383 lat (msec) : 20=0.29%, 50=99.71% 00:36:14.383 cpu : usr=98.25%, sys=1.13%, ctx=63, majf=0, minf=9 00:36:14.383 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:14.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.383 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.383 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:14.383 00:36:14.383 Run status group 0 (all jobs): 00:36:14.383 READ: bw=55.9MiB/s (58.6MB/s), 2364KiB/s-2786KiB/s (2421kB/s-2852kB/s), io=560MiB (587MB), run=10003-10020msec 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for 
sub in "$@" 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.383 bdev_null0 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.383 [2024-12-10 14:38:13.748817] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.383 bdev_null1 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:14.383 14:38:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:14.383 { 00:36:14.383 "params": { 00:36:14.383 "name": "Nvme$subsystem", 00:36:14.383 "trtype": "$TEST_TRANSPORT", 00:36:14.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:14.383 "adrfam": "ipv4", 00:36:14.383 "trsvcid": "$NVMF_PORT", 00:36:14.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:14.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:14.384 "hdgst": ${hdgst:-false}, 00:36:14.384 "ddgst": ${ddgst:-false} 00:36:14.384 }, 00:36:14.384 "method": "bdev_nvme_attach_controller" 00:36:14.384 } 00:36:14.384 EOF 00:36:14.384 )") 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:14.384 { 00:36:14.384 "params": { 00:36:14.384 "name": "Nvme$subsystem", 00:36:14.384 "trtype": "$TEST_TRANSPORT", 00:36:14.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:14.384 "adrfam": "ipv4", 00:36:14.384 "trsvcid": "$NVMF_PORT", 00:36:14.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:14.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:14.384 "hdgst": ${hdgst:-false}, 00:36:14.384 "ddgst": ${ddgst:-false} 00:36:14.384 }, 00:36:14.384 "method": "bdev_nvme_attach_controller" 00:36:14.384 } 00:36:14.384 EOF 00:36:14.384 )") 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # 
(( file <= files )) 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:14.384 "params": { 00:36:14.384 "name": "Nvme0", 00:36:14.384 "trtype": "tcp", 00:36:14.384 "traddr": "10.0.0.2", 00:36:14.384 "adrfam": "ipv4", 00:36:14.384 "trsvcid": "4420", 00:36:14.384 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:14.384 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:14.384 "hdgst": false, 00:36:14.384 "ddgst": false 00:36:14.384 }, 00:36:14.384 "method": "bdev_nvme_attach_controller" 00:36:14.384 },{ 00:36:14.384 "params": { 00:36:14.384 "name": "Nvme1", 00:36:14.384 "trtype": "tcp", 00:36:14.384 "traddr": "10.0.0.2", 00:36:14.384 "adrfam": "ipv4", 00:36:14.384 "trsvcid": "4420", 00:36:14.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:14.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:14.384 "hdgst": false, 00:36:14.384 "ddgst": false 00:36:14.384 }, 00:36:14.384 "method": "bdev_nvme_attach_controller" 00:36:14.384 }' 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:14.384 14:38:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:14.384 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:14.384 ... 00:36:14.384 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:14.384 ... 
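The last few trace lines above show the launch mechanism itself: the ldd/grep probes found no ASAN runtime to preload (asan_lib stays empty), so LD_PRELOAD carries only the SPDK fio plugin, and fio is started with the bdev JSON config and the generated jobfile handed over as the two process-substitution descriptors. A minimal sketch of the same invocation, where spdk_config.json and jobfile.fio are hypothetical stand-ins for the /dev/fd/62 and /dev/fd/61 streams and the plugin path is the one built in this workspace:

PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
LD_PRELOAD="$PLUGIN" /usr/src/fio/fio \
    --ioengine=spdk_bdev \
    --spdk_json_conf spdk_config.json \
    jobfile.fio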
00:36:14.384 fio-3.35 00:36:14.384 Starting 4 threads 00:36:19.651 00:36:19.651 filename0: (groupid=0, jobs=1): err= 0: pid=1924757: Tue Dec 10 14:38:19 2024 00:36:19.651 read: IOPS=2657, BW=20.8MiB/s (21.8MB/s)(104MiB/5001msec) 00:36:19.651 slat (nsec): min=5981, max=64232, avg=11392.43, stdev=7031.04 00:36:19.651 clat (usec): min=713, max=6022, avg=2975.96, stdev=459.28 00:36:19.651 lat (usec): min=742, max=6034, avg=2987.35, stdev=459.38 00:36:19.651 clat percentiles (usec): 00:36:19.651 | 1.00th=[ 1958], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2671], 00:36:19.651 | 30.00th=[ 2802], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 2999], 00:36:19.651 | 70.00th=[ 3097], 80.00th=[ 3228], 90.00th=[ 3458], 95.00th=[ 3752], 00:36:19.651 | 99.00th=[ 4555], 99.50th=[ 4883], 99.90th=[ 5276], 99.95th=[ 5473], 00:36:19.651 | 99.99th=[ 5997] 00:36:19.651 bw ( KiB/s): min=20048, max=22320, per=24.94%, avg=21265.78, stdev=661.90, samples=9 00:36:19.651 iops : min= 2506, max= 2790, avg=2658.22, stdev=82.74, samples=9 00:36:19.651 lat (usec) : 750=0.01% 00:36:19.651 lat (msec) : 2=1.24%, 4=95.50%, 10=3.25% 00:36:19.651 cpu : usr=96.64%, sys=3.02%, ctx=11, majf=0, minf=9 00:36:19.651 IO depths : 1=0.4%, 2=7.4%, 4=63.7%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:19.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.651 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.651 issued rwts: total=13288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:19.651 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:19.651 filename0: (groupid=0, jobs=1): err= 0: pid=1924758: Tue Dec 10 14:38:19 2024 00:36:19.651 read: IOPS=2635, BW=20.6MiB/s (21.6MB/s)(103MiB/5001msec) 00:36:19.651 slat (nsec): min=5978, max=63926, avg=11808.19, stdev=7756.43 00:36:19.651 clat (usec): min=610, max=5910, avg=2997.89, stdev=520.70 00:36:19.651 lat (usec): min=621, max=5925, avg=3009.70, stdev=520.53 00:36:19.651 clat percentiles (usec): 00:36:19.651 | 1.00th=[ 1778], 5.00th=[ 2245], 10.00th=[ 2442], 20.00th=[ 2671], 00:36:19.651 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 3032], 00:36:19.651 | 70.00th=[ 3130], 80.00th=[ 3261], 90.00th=[ 3556], 95.00th=[ 4015], 00:36:19.651 | 99.00th=[ 4752], 99.50th=[ 5014], 99.90th=[ 5473], 99.95th=[ 5604], 00:36:19.651 | 99.99th=[ 5800] 00:36:19.651 bw ( KiB/s): min=19696, max=22272, per=24.66%, avg=21032.89, stdev=725.72, samples=9 00:36:19.651 iops : min= 2462, max= 2784, avg=2629.11, stdev=90.71, samples=9 00:36:19.651 lat (usec) : 750=0.02%, 1000=0.09% 00:36:19.651 lat (msec) : 2=1.95%, 4=92.93%, 10=5.01% 00:36:19.651 cpu : usr=96.02%, sys=3.36%, ctx=133, majf=0, minf=9 00:36:19.651 IO depths : 1=0.2%, 2=8.0%, 4=63.4%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:19.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.651 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.651 issued rwts: total=13182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:19.651 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:19.651 filename1: (groupid=0, jobs=1): err= 0: pid=1924759: Tue Dec 10 14:38:19 2024 00:36:19.651 read: IOPS=2780, BW=21.7MiB/s (22.8MB/s)(109MiB/5003msec) 00:36:19.651 slat (nsec): min=5983, max=68060, avg=11656.66, stdev=6928.58 00:36:19.651 clat (usec): min=556, max=5667, avg=2841.03, stdev=436.74 00:36:19.651 lat (usec): min=562, max=5679, avg=2852.68, stdev=437.25 00:36:19.651 clat percentiles (usec): 00:36:19.651 | 1.00th=[ 1696], 
5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2507], 00:36:19.651 | 30.00th=[ 2671], 40.00th=[ 2802], 50.00th=[ 2868], 60.00th=[ 2933], 00:36:19.651 | 70.00th=[ 3032], 80.00th=[ 3130], 90.00th=[ 3294], 95.00th=[ 3490], 00:36:19.651 | 99.00th=[ 4113], 99.50th=[ 4424], 99.90th=[ 5080], 99.95th=[ 5276], 00:36:19.651 | 99.99th=[ 5342] 00:36:19.651 bw ( KiB/s): min=20800, max=24064, per=26.09%, avg=22251.20, stdev=1023.83, samples=10 00:36:19.651 iops : min= 2600, max= 3008, avg=2781.40, stdev=127.98, samples=10 00:36:19.651 lat (usec) : 750=0.02%, 1000=0.03% 00:36:19.651 lat (msec) : 2=2.65%, 4=96.01%, 10=1.29% 00:36:19.651 cpu : usr=96.74%, sys=2.92%, ctx=7, majf=0, minf=9 00:36:19.651 IO depths : 1=0.5%, 2=8.5%, 4=61.7%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:19.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.651 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.651 issued rwts: total=13913,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:19.651 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:19.651 filename1: (groupid=0, jobs=1): err= 0: pid=1924760: Tue Dec 10 14:38:19 2024 00:36:19.651 read: IOPS=2588, BW=20.2MiB/s (21.2MB/s)(101MiB/5001msec) 00:36:19.651 slat (nsec): min=5843, max=72106, avg=12365.44, stdev=7216.06 00:36:19.651 clat (usec): min=761, max=6261, avg=3053.97, stdev=520.76 00:36:19.651 lat (usec): min=771, max=6267, avg=3066.34, stdev=520.31 00:36:19.651 clat percentiles (usec): 00:36:19.651 | 1.00th=[ 1860], 5.00th=[ 2278], 10.00th=[ 2507], 20.00th=[ 2737], 00:36:19.651 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3064], 00:36:19.651 | 70.00th=[ 3163], 80.00th=[ 3326], 90.00th=[ 3654], 95.00th=[ 4080], 00:36:19.651 | 99.00th=[ 4817], 99.50th=[ 5080], 99.90th=[ 5538], 99.95th=[ 5604], 00:36:19.651 | 99.99th=[ 6259] 00:36:19.651 bw ( KiB/s): min=18960, max=22144, per=24.15%, avg=20592.00, stdev=889.04, samples=9 00:36:19.651 iops : min= 2370, max= 2768, avg=2574.00, stdev=111.13, samples=9 00:36:19.651 lat (usec) : 1000=0.08% 00:36:19.651 lat (msec) : 2=1.68%, 4=92.60%, 10=5.64% 00:36:19.651 cpu : usr=96.54%, sys=3.00%, ctx=78, majf=0, minf=9 00:36:19.651 IO depths : 1=0.3%, 2=3.9%, 4=67.6%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:19.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.651 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.651 issued rwts: total=12946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:19.651 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:19.651 00:36:19.651 Run status group 0 (all jobs): 00:36:19.651 READ: bw=83.3MiB/s (87.3MB/s), 20.2MiB/s-21.7MiB/s (21.2MB/s-22.8MB/s), io=417MiB (437MB), run=5001-5003msec 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.651 00:36:19.651 real 0m24.550s 00:36:19.651 user 4m52.201s 00:36:19.651 sys 0m4.964s 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:19.651 14:38:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:19.651 ************************************ 00:36:19.651 END TEST fio_dif_rand_params 00:36:19.651 ************************************ 00:36:19.651 14:38:20 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:19.651 14:38:20 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:19.651 14:38:20 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:19.651 14:38:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:19.651 ************************************ 00:36:19.651 START TEST fio_dif_digest 00:36:19.651 ************************************ 00:36:19.651 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:19.652 bdev_null0 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:19.652 [2024-12-10 14:38:20.347194] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:19.652 { 00:36:19.652 "params": { 00:36:19.652 "name": 
"Nvme$subsystem", 00:36:19.652 "trtype": "$TEST_TRANSPORT", 00:36:19.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:19.652 "adrfam": "ipv4", 00:36:19.652 "trsvcid": "$NVMF_PORT", 00:36:19.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:19.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:19.652 "hdgst": ${hdgst:-false}, 00:36:19.652 "ddgst": ${ddgst:-false} 00:36:19.652 }, 00:36:19.652 "method": "bdev_nvme_attach_controller" 00:36:19.652 } 00:36:19.652 EOF 00:36:19.652 )") 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:36:19.652 14:38:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:19.652 "params": { 00:36:19.652 "name": "Nvme0", 00:36:19.652 "trtype": "tcp", 00:36:19.652 "traddr": "10.0.0.2", 00:36:19.652 "adrfam": "ipv4", 00:36:19.652 "trsvcid": "4420", 00:36:19.652 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:19.652 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:19.652 "hdgst": true, 00:36:19.652 "ddgst": true 00:36:19.652 }, 00:36:19.652 "method": "bdev_nvme_attach_controller" 00:36:19.652 }' 00:36:19.928 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:19.928 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:19.928 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:19.928 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:19.928 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:19.928 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:19.928 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:19.928 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:19.928 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:19.928 14:38:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:20.191 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:20.191 ... 
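[Editor's sketch] Right before launching fio, the harness probes the external ioengine for a linked ASan runtime so the sanitizer library can be preloaded ahead of the plugin; both greps come back empty in this run, leaving only the plugin in LD_PRELOAD. A rough standalone equivalent of autotest_common.sh@1341-1356 follows; gen_nvmf_target_json and gen_fio_conf are the helpers from the trace, and feeding them through process substitution is exactly what produces the /dev/fd/62 and /dev/fd/61 arguments seen above.

#!/usr/bin/env bash
# Sketch of the sanitizer-preload check before running fio with the
# spdk_bdev external ioengine.
fio_dir=/usr/src/fio
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
sanitizers=('libasan' 'libclang_rt.asan')

asan_lib=
for sanitizer in "${sanitizers[@]}"; do
    # Column 3 of ldd output is the resolved library path; it stays
    # empty when the plugin was built without that sanitizer, which is
    # the case in this run.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done

# The ASan runtime (if any) must come first in LD_PRELOAD so its
# interposed symbols win over the plugin's.
LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" \
    --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0) \
    <(gen_fio_conf)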
00:36:20.191 fio-3.35 00:36:20.191 Starting 3 threads 00:36:32.395 00:36:32.395 filename0: (groupid=0, jobs=1): err= 0: pid=1925884: Tue Dec 10 14:38:31 2024 00:36:32.395 read: IOPS=296, BW=37.1MiB/s (38.9MB/s)(371MiB/10004msec) 00:36:32.395 slat (nsec): min=6220, max=74494, avg=20940.38, stdev=7697.39 00:36:32.395 clat (usec): min=3377, max=13283, avg=10084.63, stdev=955.03 00:36:32.395 lat (usec): min=3384, max=13296, avg=10105.57, stdev=955.36 00:36:32.395 clat percentiles (usec): 00:36:32.395 | 1.00th=[ 6915], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503], 00:36:32.395 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:36:32.395 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11076], 95.00th=[11469], 00:36:32.395 | 99.00th=[12125], 99.50th=[12256], 99.90th=[12780], 99.95th=[12911], 00:36:32.395 | 99.99th=[13304] 00:36:32.395 bw ( KiB/s): min=35072, max=42496, per=35.90%, avg=38036.21, stdev=1488.61, samples=19 00:36:32.395 iops : min= 274, max= 332, avg=297.16, stdev=11.63, samples=19 00:36:32.395 lat (msec) : 4=0.40%, 10=43.48%, 20=56.11% 00:36:32.395 cpu : usr=95.88%, sys=3.79%, ctx=38, majf=0, minf=49 00:36:32.396 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:32.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.396 issued rwts: total=2969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.396 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:32.396 filename0: (groupid=0, jobs=1): err= 0: pid=1925885: Tue Dec 10 14:38:31 2024 00:36:32.396 read: IOPS=267, BW=33.4MiB/s (35.0MB/s)(336MiB/10047msec) 00:36:32.396 slat (nsec): min=6460, max=56540, avg=17711.12, stdev=7987.18 00:36:32.396 clat (usec): min=7056, max=54734, avg=11188.41, stdev=1839.10 00:36:32.396 lat (usec): min=7073, max=54757, avg=11206.12, stdev=1838.90 00:36:32.396 clat percentiles (usec): 00:36:32.396 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10421], 00:36:32.396 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:36:32.396 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12518], 00:36:32.396 | 99.00th=[13435], 99.50th=[13698], 99.90th=[46924], 99.95th=[49021], 00:36:32.396 | 99.99th=[54789] 00:36:32.396 bw ( KiB/s): min=32256, max=36352, per=32.41%, avg=34342.40, stdev=1001.09, samples=20 00:36:32.396 iops : min= 252, max= 284, avg=268.30, stdev= 7.82, samples=20 00:36:32.396 lat (msec) : 10=7.11%, 20=92.70%, 50=0.15%, 100=0.04% 00:36:32.396 cpu : usr=96.97%, sys=2.70%, ctx=20, majf=0, minf=83 00:36:32.396 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:32.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.396 issued rwts: total=2685,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.396 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:32.396 filename0: (groupid=0, jobs=1): err= 0: pid=1925886: Tue Dec 10 14:38:31 2024 00:36:32.396 read: IOPS=264, BW=33.1MiB/s (34.7MB/s)(333MiB/10047msec) 00:36:32.396 slat (nsec): min=6455, max=48849, avg=18328.74, stdev=8260.01 00:36:32.396 clat (usec): min=6433, max=57363, avg=11284.91, stdev=2059.43 00:36:32.396 lat (usec): min=6440, max=57393, avg=11303.23, stdev=2060.04 00:36:32.396 clat percentiles (usec): 00:36:32.396 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 
00:36:32.396 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:36:32.396 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12387], 95.00th=[12780], 00:36:32.396 | 99.00th=[13566], 99.50th=[13960], 99.90th=[57410], 99.95th=[57410], 00:36:32.396 | 99.99th=[57410] 00:36:32.396 bw ( KiB/s): min=31232, max=36608, per=32.14%, avg=34048.00, stdev=1286.72, samples=20 00:36:32.396 iops : min= 244, max= 286, avg=266.00, stdev=10.05, samples=20 00:36:32.396 lat (msec) : 10=6.87%, 20=92.94%, 50=0.04%, 100=0.15% 00:36:32.396 cpu : usr=97.01%, sys=2.67%, ctx=17, majf=0, minf=80 00:36:32.396 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:32.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.396 issued rwts: total=2662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.396 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:32.396 00:36:32.396 Run status group 0 (all jobs): 00:36:32.396 READ: bw=103MiB/s (108MB/s), 33.1MiB/s-37.1MiB/s (34.7MB/s-38.9MB/s), io=1040MiB (1090MB), run=10004-10047msec 00:36:32.396 14:38:31 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:32.396 14:38:31 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:32.396 14:38:31 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:32.396 14:38:31 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:32.396 14:38:31 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:32.396 14:38:31 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:32.396 14:38:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.396 14:38:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:32.396 14:38:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.396 14:38:31 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:32.396 14:38:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.396 14:38:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:32.396 14:38:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.396 00:36:32.396 real 0m11.199s 00:36:32.396 user 0m35.822s 00:36:32.396 sys 0m1.253s 00:36:32.396 14:38:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:32.396 14:38:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:32.396 ************************************ 00:36:32.396 END TEST fio_dif_digest 00:36:32.396 ************************************ 00:36:32.396 14:38:31 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:32.396 14:38:31 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:32.396 14:38:31 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:32.396 14:38:31 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:36:32.396 14:38:31 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:32.396 14:38:31 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:36:32.396 14:38:31 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:32.396 14:38:31 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:32.396 rmmod nvme_tcp 00:36:32.396 rmmod nvme_fabrics 00:36:32.396 rmmod nvme_keyring 00:36:32.396 14:38:31 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:32.396 14:38:31 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:36:32.396 14:38:31 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:36:32.396 14:38:31 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1917393 ']' 00:36:32.396 14:38:31 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1917393 00:36:32.396 14:38:31 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1917393 ']' 00:36:32.396 14:38:31 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1917393 00:36:32.396 14:38:31 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:36:32.396 14:38:31 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:32.396 14:38:31 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1917393 00:36:32.396 14:38:31 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:32.396 14:38:31 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:32.396 14:38:31 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1917393' 00:36:32.396 killing process with pid 1917393 00:36:32.396 14:38:31 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1917393 00:36:32.396 14:38:31 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1917393 00:36:32.396 14:38:31 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:32.396 14:38:31 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:34.297 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:36:34.297 Waiting for block devices as requested 00:36:34.556 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:34.556 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:34.556 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:34.815 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:34.815 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:34.815 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:35.072 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:35.072 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:35.072 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:35.072 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:35.330 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:35.330 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:35.330 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:35.589 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:35.589 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:35.589 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:35.847 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:35.847 14:38:36 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:35.847 14:38:36 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:35.847 14:38:36 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:36:35.847 14:38:36 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:36:35.847 14:38:36 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:35.847 14:38:36 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:36:35.847 14:38:36 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:35.847 14:38:36 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:35.847 14:38:36 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:35.847 14:38:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:35.847 14:38:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:38.379 
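[Editor's sketch] The nvmftestfini teardown traced through here condenses to a handful of steps; a simplified sketch under stated assumptions: nvmfpid is the target PID captured at startup (1917393 in this run), killprocess in the harness additionally polls for exit, and ip netns delete stands in for the unshown _remove_spdk_ns helper.

#!/usr/bin/env bash
# Sketch of nvmftestfini for the tcp/phy case: flush I/O, unload the
# kernel initiator stack, stop the nvmf target, strip the SPDK iptables
# rules, and tear down the test namespace.
sync
modprobe -v -r nvme-tcp       # cascades to the rmmod nvme_fabrics /
modprobe -v -r nvme-fabrics   # nvme_keyring lines shown above

kill "$nvmfpid"               # killprocess also waits for the PID to die

# iptr: re-load the ruleset minus every rule tagged with the SPDK_NVMF
# comment that was added at setup time.
iptables-save | grep -v SPDK_NVMF | iptables-restore

ip netns delete cvl_0_0_ns_spdk   # assumption: _remove_spdk_ns equivalent
ip -4 addr flush cvl_0_1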
14:38:38 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:38.379 00:36:38.379 real 1m16.194s 00:36:38.379 user 7m10.787s 00:36:38.379 sys 0m20.938s 00:36:38.379 14:38:38 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:38.379 14:38:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:38.379 ************************************ 00:36:38.379 END TEST nvmf_dif 00:36:38.379 ************************************ 00:36:38.379 14:38:38 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:38.379 14:38:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:38.379 14:38:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:38.379 14:38:38 -- common/autotest_common.sh@10 -- # set +x 00:36:38.379 ************************************ 00:36:38.379 START TEST nvmf_abort_qd_sizes 00:36:38.379 ************************************ 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:38.379 * Looking for test storage... 00:36:38.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:38.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:38.379 --rc genhtml_branch_coverage=1 00:36:38.379 --rc genhtml_function_coverage=1 00:36:38.379 --rc genhtml_legend=1 00:36:38.379 --rc geninfo_all_blocks=1 00:36:38.379 --rc geninfo_unexecuted_blocks=1 00:36:38.379 00:36:38.379 ' 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:38.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:38.379 --rc genhtml_branch_coverage=1 00:36:38.379 --rc genhtml_function_coverage=1 00:36:38.379 --rc genhtml_legend=1 00:36:38.379 --rc geninfo_all_blocks=1 00:36:38.379 --rc geninfo_unexecuted_blocks=1 00:36:38.379 00:36:38.379 ' 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:38.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:38.379 --rc genhtml_branch_coverage=1 00:36:38.379 --rc genhtml_function_coverage=1 00:36:38.379 --rc genhtml_legend=1 00:36:38.379 --rc geninfo_all_blocks=1 00:36:38.379 --rc geninfo_unexecuted_blocks=1 00:36:38.379 00:36:38.379 ' 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:38.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:38.379 --rc genhtml_branch_coverage=1 00:36:38.379 --rc genhtml_function_coverage=1 00:36:38.379 --rc genhtml_legend=1 00:36:38.379 --rc geninfo_all_blocks=1 00:36:38.379 --rc geninfo_unexecuted_blocks=1 00:36:38.379 00:36:38.379 ' 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:38.379 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:38.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:36:38.380 14:38:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:44.942 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:44.942 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:44.942 Found net devices under 0000:af:00.0: cvl_0_0 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:44.942 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:44.943 Found net devices under 0000:af:00.1: cvl_0_1 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:44.943 14:38:45 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:44.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:44.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:36:44.943 00:36:44.943 --- 10.0.0.2 ping statistics --- 00:36:44.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:44.943 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:44.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:44.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:36:44.943 00:36:44.943 --- 10.0.0.1 ping statistics --- 00:36:44.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:44.943 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:44.943 14:38:45 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:48.226 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:36:48.226 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:48.226 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:48.226 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:48.226 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:48.226 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:48.226 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:48.226 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:48.226 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:48.226 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:48.226 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:48.226 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:48.226 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:48.226 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:48.226 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:48.226 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:48.226 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:49.161 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:49.161 14:38:49 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:49.161 14:38:49 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:49.161 14:38:49 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:49.161 14:38:49 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:49.161 14:38:49 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:49.161 14:38:49 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:49.161 14:38:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:49.161 14:38:49 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:49.161 14:38:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:49.161 14:38:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:49.161 14:38:49 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1934581 00:36:49.161 14:38:49 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1934581 00:36:49.161 14:38:49 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:49.161 14:38:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1934581 ']' 00:36:49.161 14:38:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:49.161 14:38:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:49.161 14:38:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:36:49.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:49.161 14:38:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:49.161 14:38:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:49.419 [2024-12-10 14:38:49.927036] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:36:49.419 [2024-12-10 14:38:49.927083] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:49.419 [2024-12-10 14:38:50.018105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:49.419 [2024-12-10 14:38:50.066892] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:49.419 [2024-12-10 14:38:50.066929] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:49.419 [2024-12-10 14:38:50.066936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:49.419 [2024-12-10 14:38:50.066942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:49.419 [2024-12-10 14:38:50.066947] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:49.419 [2024-12-10 14:38:50.068420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:49.419 [2024-12-10 14:38:50.068530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:49.419 [2024-12-10 14:38:50.068635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:49.419 [2024-12-10 14:38:50.068636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 0000:5f:00.0 ]] 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- 
scripts/common.sh@323 -- # uname -s 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5f:00.0 ]] 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- scripts/common.sh@324 -- # continue 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:50.350 14:38:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:50.350 ************************************ 00:36:50.350 START TEST spdk_target_abort 00:36:50.351 ************************************ 00:36:50.351 14:38:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:36:50.351 14:38:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:50.351 14:38:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:36:50.351 14:38:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.351 14:38:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:53.624 spdk_targetn1 00:36:53.624 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.624 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:53.624 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.624 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:53.625 [2024-12-10 14:38:53.702752] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
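
For orientation, the spdk_target_abort setup traced just above and below reduces to five RPCs against the nvmf_tgt started earlier inside the cvl_0_0_ns_spdk namespace. A condensed sketch, not the verbatim test script; bare rpc.py here stands in for the test's rpc_cmd wrapper and assumes rpc.py's default /var/tmp/spdk.sock socket:

  # claim the local 0000:5e:00.0 NVMe device (bdev comes up as spdk_targetn1), then export it over TCP
  rpc.py bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

The abort example binary is then aimed at trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn once per queue depth in 4, 24, 64, as the rabort loop below shows.
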
00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:53.625 [2024-12-10 14:38:53.751071] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:53.625 14:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:53.625 14:38:53 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:56.898 Initializing NVMe Controllers 00:36:56.898 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:56.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:56.898 Initialization complete. Launching workers. 00:36:56.898 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14710, failed: 0 00:36:56.899 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1241, failed to submit 13469 00:36:56.899 success 703, unsuccessful 538, failed 0 00:36:56.899 14:38:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:56.899 14:38:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:00.175 Initializing NVMe Controllers 00:37:00.175 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:00.175 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:00.175 Initialization complete. Launching workers. 00:37:00.175 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8784, failed: 0 00:37:00.175 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1248, failed to submit 7536 00:37:00.175 success 331, unsuccessful 917, failed 0 00:37:00.175 14:39:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:00.175 14:39:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:03.447 Initializing NVMe Controllers 00:37:03.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:03.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:03.447 Initialization complete. Launching workers. 
00:37:03.447 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38507, failed: 0 00:37:03.447 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2871, failed to submit 35636 00:37:03.447 success 603, unsuccessful 2268, failed 0 00:37:03.447 14:39:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:03.447 14:39:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.447 14:39:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:03.447 14:39:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.447 14:39:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:03.447 14:39:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.447 14:39:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:04.377 14:39:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.377 14:39:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1934581 00:37:04.377 14:39:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1934581 ']' 00:37:04.377 14:39:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1934581 00:37:04.377 14:39:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:37:04.377 14:39:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:04.377 14:39:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1934581 00:37:04.377 14:39:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:04.377 14:39:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:04.377 14:39:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1934581' 00:37:04.377 killing process with pid 1934581 00:37:04.377 14:39:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1934581 00:37:04.377 14:39:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1934581 00:37:04.636 00:37:04.636 real 0m14.268s 00:37:04.636 user 0m56.915s 00:37:04.636 sys 0m2.552s 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:04.636 ************************************ 00:37:04.636 END TEST spdk_target_abort 00:37:04.636 ************************************ 00:37:04.636 14:39:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:04.636 14:39:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:04.636 14:39:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:04.636 14:39:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:04.636 ************************************ 00:37:04.636 START TEST kernel_target_abort 00:37:04.636 
************************************ 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:04.636 14:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:07.922 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:37:07.922 Waiting for block devices as requested 00:37:07.922 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:37:07.922 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:08.181 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:08.181 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:08.181 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:08.181 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:08.439 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:08.439 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:08.439 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:08.698 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:08.698 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:08.698 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:08.956 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:08.956 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:08.956 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:08.956 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:09.214 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:09.214 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:09.214 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:09.214 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:09.214 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:37:09.214 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:09.214 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:37:09.214 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:09.214 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:09.214 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:09.214 No valid GPT data, bailing 00:37:09.214 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:09.214 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:09.214 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:09.214 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:09.214 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:09.214 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:37:09.214 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:37:09.214 14:39:09 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:37:09.214 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:37:09.214 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:37:09.214 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:37:09.214 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:37:09.214 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:37:09.472 No valid GPT data, bailing 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n2 ]] 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n2 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # continue 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:37:09.472 14:39:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln 
-s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:09.472 14:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:37:09.472
00:37:09.472 Discovery Log Number of Records 2, Generation counter 2
00:37:09.472 =====Discovery Log Entry 0======
00:37:09.472 trtype: tcp
00:37:09.472 adrfam: ipv4
00:37:09.472 subtype: current discovery subsystem
00:37:09.472 treq: not specified, sq flow control disable supported
00:37:09.472 portid: 1
00:37:09.472 trsvcid: 4420
00:37:09.472 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:37:09.472 traddr: 10.0.0.1
00:37:09.472 eflags: none
00:37:09.472 sectype: none
00:37:09.473 =====Discovery Log Entry 1======
00:37:09.473 trtype: tcp
00:37:09.473 adrfam: ipv4
00:37:09.473 subtype: nvme subsystem
00:37:09.473 treq: not specified, sq flow control disable supported
00:37:09.473 portid: 1
00:37:09.473 trsvcid: 4420
00:37:09.473 subnqn: nqn.2016-06.io.spdk:testnqn
00:37:09.473 traddr: 10.0.0.1
00:37:09.473 eflags: none
00:37:09.473 sectype: none
00:37:09.473 14:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:09.473 14:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:09.473 14:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:09.473 14:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:09.473 14:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:09.473 14:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:09.473 14:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:09.473 14:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:09.473 14:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:09.473 14:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:09.473 14:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:09.473 14:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:09.473 14:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:09.473 14:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:09.473 14:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:09.473 14:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:09.473 14:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:09.473 14:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 
-- # for r in trtype adrfam traddr trsvcid subnqn 00:37:09.473 14:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:09.473 14:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:09.473 14:39:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:12.750 Initializing NVMe Controllers 00:37:12.750 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:12.750 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:12.750 Initialization complete. Launching workers. 00:37:12.750 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80065, failed: 0 00:37:12.750 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 80065, failed to submit 0 00:37:12.750 success 0, unsuccessful 80065, failed 0 00:37:12.750 14:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:12.750 14:39:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:16.026 Initializing NVMe Controllers 00:37:16.026 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:16.026 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:16.026 Initialization complete. Launching workers. 00:37:16.026 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 139054, failed: 0 00:37:16.026 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32262, failed to submit 106792 00:37:16.026 success 0, unsuccessful 32262, failed 0 00:37:16.026 14:39:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:16.026 14:39:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:19.390 Initializing NVMe Controllers 00:37:19.390 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:19.390 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:19.390 Initialization complete. Launching workers. 
00:37:19.390 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 129362, failed: 0 00:37:19.390 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32370, failed to submit 96992 00:37:19.390 success 0, unsuccessful 32370, failed 0 00:37:19.390 14:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:19.390 14:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:19.390 14:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:37:19.390 14:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:19.391 14:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:19.391 14:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:19.391 14:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:19.391 14:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:37:19.391 14:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:37:19.391 14:39:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:21.923 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:37:22.181 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:22.181 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:22.181 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:22.181 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:22.181 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:22.181 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:22.181 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:22.181 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:22.181 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:22.181 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:22.181 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:22.181 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:22.181 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:22.181 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:22.181 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:22.181 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:23.114 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:37:23.114 00:37:23.114 real 0m18.613s 00:37:23.114 user 0m9.000s 00:37:23.114 sys 0m5.962s 00:37:23.114 14:39:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:23.114 14:39:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:23.114 ************************************ 00:37:23.114 END TEST kernel_target_abort 00:37:23.114 ************************************ 00:37:23.373 14:39:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:23.373 14:39:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:23.373 14:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:23.373 14:39:23 nvmf_abort_qd_sizes -- 
nvmf/common.sh@121 -- # sync 00:37:23.373 14:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:23.373 14:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:23.373 14:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:23.373 14:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:23.373 rmmod nvme_tcp 00:37:23.373 rmmod nvme_fabrics 00:37:23.373 rmmod nvme_keyring 00:37:23.373 14:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:23.373 14:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:23.373 14:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:23.373 14:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1934581 ']' 00:37:23.373 14:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1934581 00:37:23.373 14:39:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1934581 ']' 00:37:23.373 14:39:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1934581 00:37:23.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1934581) - No such process 00:37:23.373 14:39:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1934581 is not found' 00:37:23.373 Process with pid 1934581 is not found 00:37:23.373 14:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:23.373 14:39:23 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:26.657 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:37:26.657 Waiting for block devices as requested 00:37:26.657 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:37:26.657 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:26.915 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:26.915 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:26.915 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:26.915 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:27.174 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:27.174 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:27.174 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:27.432 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:27.432 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:27.432 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:27.432 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:27.690 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:27.690 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:27.690 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:27.949 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:27.949 14:39:28 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:27.949 14:39:28 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:27.949 14:39:28 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:27.949 14:39:28 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:37:27.949 14:39:28 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:27.949 14:39:28 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:37:27.949 14:39:28 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:27.949 14:39:28 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:27.949 14:39:28 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:37:27.949 14:39:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:27.949 14:39:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:30.481 14:39:30 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:30.481 00:37:30.481 real 0m52.029s 00:37:30.481 user 1m11.062s 00:37:30.481 sys 0m18.484s 00:37:30.481 14:39:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:30.481 14:39:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:30.481 ************************************ 00:37:30.481 END TEST nvmf_abort_qd_sizes 00:37:30.482 ************************************ 00:37:30.482 14:39:30 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:30.482 14:39:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:30.482 14:39:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:30.482 14:39:30 -- common/autotest_common.sh@10 -- # set +x 00:37:30.482 ************************************ 00:37:30.482 START TEST keyring_file 00:37:30.482 ************************************ 00:37:30.482 14:39:30 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:30.482 * Looking for test storage... 00:37:30.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:30.482 14:39:30 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:30.482 14:39:30 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:37:30.482 14:39:30 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:30.482 14:39:30 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@345 -- # : 1 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@353 -- # local d=1 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@355 -- # echo 1 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@353 -- # local d=2 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@355 -- # echo 2 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@368 -- # return 0 00:37:30.482 14:39:30 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:30.482 14:39:30 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:30.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.482 --rc genhtml_branch_coverage=1 00:37:30.482 --rc genhtml_function_coverage=1 00:37:30.482 --rc genhtml_legend=1 00:37:30.482 --rc geninfo_all_blocks=1 00:37:30.482 --rc geninfo_unexecuted_blocks=1 00:37:30.482 00:37:30.482 ' 00:37:30.482 14:39:30 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:30.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.482 --rc genhtml_branch_coverage=1 00:37:30.482 --rc genhtml_function_coverage=1 00:37:30.482 --rc genhtml_legend=1 00:37:30.482 --rc geninfo_all_blocks=1 00:37:30.482 --rc geninfo_unexecuted_blocks=1 00:37:30.482 00:37:30.482 ' 00:37:30.482 14:39:30 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:30.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.482 --rc genhtml_branch_coverage=1 00:37:30.482 --rc genhtml_function_coverage=1 00:37:30.482 --rc genhtml_legend=1 00:37:30.482 --rc geninfo_all_blocks=1 00:37:30.482 --rc geninfo_unexecuted_blocks=1 00:37:30.482 00:37:30.482 ' 00:37:30.482 14:39:30 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:30.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.482 --rc genhtml_branch_coverage=1 00:37:30.482 --rc genhtml_function_coverage=1 00:37:30.482 --rc genhtml_legend=1 00:37:30.482 --rc geninfo_all_blocks=1 00:37:30.482 --rc geninfo_unexecuted_blocks=1 00:37:30.482 00:37:30.482 ' 00:37:30.482 14:39:30 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:30.482 14:39:30 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:30.482 
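
The lcov gate traced above (lt 1.15 2 becomes cmp_versions 1.15 '<' 2) splits both version strings on dots and dashes and compares the fields numerically from the left. A minimal standalone sketch of that idea, not the verbatim scripts/common.sh helper:

  # split on '.', '-' and ':', compare field by field; absent fields count as 0;
  # no output means the versions are equal
  IFS=.-: read -ra ver1 <<< "1.15"
  IFS=.-: read -ra ver2 <<< "2"
  for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
      ((${ver1[v]:-0} < ${ver2[v]:-0})) && { echo "1.15 < 2"; break; }
      ((${ver1[v]:-0} > ${ver2[v]:-0})) && { echo "1.15 > 2"; break; }
  done

Here the first field already decides it (1 < 2), which is why the trace takes the lt branch and selects the --rc lcov_branch_coverage=1 spelling seen in the LCOV_OPTS exports above.
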
14:39:30 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:30.482 14:39:30 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:30.482 14:39:30 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.482 14:39:30 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.482 14:39:30 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.482 14:39:30 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:30.482 14:39:30 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@51 -- # : 0 
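
One detail worth pulling out of the sourcing above: nvmf/common.sh derives a host identity once and reuses it everywhere, including the nvme discover call against the kernel target earlier in this run. A sketch; the parameter expansion is illustrative and may differ from the actual source:

  NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>, per the trace
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumption: hostid is the UUID tail of the hostnqn
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  nvme discover "${NVME_HOST[@]}" -a 10.0.0.1 -t tcp -s 4420
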
00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:30.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:30.482 14:39:30 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:30.482 14:39:30 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:30.482 14:39:30 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:30.482 14:39:30 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:30.482 14:39:30 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:30.482 14:39:30 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:30.482 14:39:30 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:30.482 14:39:30 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:30.482 14:39:30 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:30.482 14:39:30 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:30.482 14:39:30 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:30.482 14:39:30 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:30.482 14:39:30 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0ECHpJitkG 00:37:30.482 14:39:30 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:30.482 14:39:30 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:37:30.483 14:39:30 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:30.483 14:39:30 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:30.483 14:39:30 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0ECHpJitkG 00:37:30.483 14:39:30 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0ECHpJitkG 00:37:30.483 14:39:30 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.0ECHpJitkG 00:37:30.483 14:39:30 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:30.483 14:39:30 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:30.483 14:39:30 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:30.483 14:39:30 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:30.483 14:39:30 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:30.483 14:39:30 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:30.483 14:39:30 keyring_file -- keyring/common.sh@18 -- 
# path=/tmp/tmp.7rfozAY6Zf 00:37:30.483 14:39:30 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:30.483 14:39:30 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:30.483 14:39:30 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:30.483 14:39:30 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:30.483 14:39:30 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:30.483 14:39:30 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:30.483 14:39:30 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:30.483 14:39:30 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.7rfozAY6Zf 00:37:30.483 14:39:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.7rfozAY6Zf 00:37:30.483 14:39:31 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.7rfozAY6Zf 00:37:30.483 14:39:31 keyring_file -- keyring/file.sh@30 -- # tgtpid=1944382 00:37:30.483 14:39:31 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:30.483 14:39:31 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1944382 00:37:30.483 14:39:31 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1944382 ']' 00:37:30.483 14:39:31 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:30.483 14:39:31 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:30.483 14:39:31 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:30.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:30.483 14:39:31 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:30.483 14:39:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:30.483 [2024-12-10 14:39:31.055840] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
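
The two prep_key blocks above come down to: encode a raw hex PSK into the NVMeTLSkey-1 interchange form, store it in a mode-0600 temp file, and later register that file with the bdevperf app by name. A condensed sketch; format_interchange_psk is nvmf/common.sh's inline-python encoder, treated as a black box, and the output redirection is an assumption:

  path=$(mktemp)                                                        # e.g. /tmp/tmp.0ECHpJitkG
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"   # emits an NVMeTLSkey-1:... string
  chmod 0600 "$path"
  rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path"

The later bdev_nvme_attach_controller ... --psk key0 calls in the trace then refer to the key by its registered name rather than by path.
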
00:37:30.483 [2024-12-10 14:39:31.055893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1944382 ] 00:37:30.483 [2024-12-10 14:39:31.136008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:30.483 [2024-12-10 14:39:31.176558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:30.741 14:39:31 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:30.741 [2024-12-10 14:39:31.398732] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:30.741 null0 00:37:30.741 [2024-12-10 14:39:31.430785] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:30.741 [2024-12-10 14:39:31.431063] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.741 14:39:31 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:30.741 [2024-12-10 14:39:31.458850] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists
00:37:30.741 request:
00:37:30.741 {
00:37:30.741 "nqn": "nqn.2016-06.io.spdk:cnode0",
00:37:30.741 "secure_channel": false,
00:37:30.741 "listen_address": {
00:37:30.741 "trtype": "tcp",
00:37:30.741 "traddr": "127.0.0.1",
00:37:30.741 "trsvcid": "4420"
00:37:30.741 },
00:37:30.741 "method": "nvmf_subsystem_add_listener",
00:37:30.741 "req_id": 1
00:37:30.741 }
00:37:30.741 Got JSON-RPC error response
00:37:30.741 response:
00:37:30.741 {
00:37:30.741 "code": -32602,
00:37:30.741 "message": "Invalid parameters"
00:37:30.741 }
00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:30.741 14:39:31 keyring_file -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:30.741 14:39:31 keyring_file -- keyring/file.sh@47 -- # bperfpid=1944604 00:37:30.741 14:39:31 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1944604 /var/tmp/bperf.sock 00:37:30.741 14:39:31 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1944604 ']' 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:30.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:30.741 14:39:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:30.999 [2024-12-10 14:39:31.512477] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:37:30.999 [2024-12-10 14:39:31.512520] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1944604 ] 00:37:30.999 [2024-12-10 14:39:31.590828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:30.999 [2024-12-10 14:39:31.631854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:30.999 14:39:31 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:30.999 14:39:31 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:30.999 14:39:31 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0ECHpJitkG 00:37:30.999 14:39:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0ECHpJitkG 00:37:31.256 14:39:31 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.7rfozAY6Zf 00:37:31.256 14:39:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.7rfozAY6Zf 00:37:31.514 14:39:32 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:37:31.514 14:39:32 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:31.514 14:39:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:31.514 14:39:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:31.514 14:39:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:31.771 14:39:32 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.0ECHpJitkG == \/\t\m\p\/\t\m\p\.\0\E\C\H\p\J\i\t\k\G ]] 00:37:31.771 14:39:32 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:37:31.771 14:39:32 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:37:31.771 14:39:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:31.771 14:39:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name 
== "key1")' 00:37:31.771 14:39:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:32.028 14:39:32 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.7rfozAY6Zf == \/\t\m\p\/\t\m\p\.\7\r\f\o\z\A\Y\6\Z\f ]] 00:37:32.028 14:39:32 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:37:32.028 14:39:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:32.028 14:39:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:32.028 14:39:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:32.028 14:39:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:32.028 14:39:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:32.028 14:39:32 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:32.028 14:39:32 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:37:32.028 14:39:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:32.028 14:39:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:32.028 14:39:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:32.028 14:39:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:32.028 14:39:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:32.285 14:39:32 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:37:32.285 14:39:32 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:32.285 14:39:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:32.542 [2024-12-10 14:39:33.075136] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:32.542 nvme0n1 00:37:32.542 14:39:33 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:37:32.542 14:39:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:32.543 14:39:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:32.543 14:39:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:32.543 14:39:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:32.543 14:39:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:32.800 14:39:33 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:37:32.800 14:39:33 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:37:32.800 14:39:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:32.800 14:39:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:32.800 14:39:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:32.800 14:39:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:32.800 14:39:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:37:33.057 14:39:33 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:37:33.057 14:39:33 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:33.057 Running I/O for 1 seconds... 00:37:33.989 19549.00 IOPS, 76.36 MiB/s 00:37:33.989 Latency(us) 00:37:33.989 [2024-12-10T13:39:34.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:33.989 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:33.989 nvme0n1 : 1.00 19597.42 76.55 0.00 0.00 6519.98 2668.25 9487.12 00:37:33.989 [2024-12-10T13:39:34.729Z] =================================================================================================================== 00:37:33.989 [2024-12-10T13:39:34.729Z] Total : 19597.42 76.55 0.00 0.00 6519.98 2668.25 9487.12 00:37:33.989 { 00:37:33.989 "results": [ 00:37:33.989 { 00:37:33.989 "job": "nvme0n1", 00:37:33.989 "core_mask": "0x2", 00:37:33.989 "workload": "randrw", 00:37:33.989 "percentage": 50, 00:37:33.989 "status": "finished", 00:37:33.989 "queue_depth": 128, 00:37:33.989 "io_size": 4096, 00:37:33.989 "runtime": 1.004112, 00:37:33.989 "iops": 19597.415427761047, 00:37:33.989 "mibps": 76.55240401469159, 00:37:33.989 "io_failed": 0, 00:37:33.989 "io_timeout": 0, 00:37:33.989 "avg_latency_us": 6519.975051277956, 00:37:33.989 "min_latency_us": 2668.2514285714287, 00:37:33.989 "max_latency_us": 9487.11619047619 00:37:33.989 } 00:37:33.989 ], 00:37:33.989 "core_count": 1 00:37:33.989 } 00:37:33.989 14:39:34 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:33.989 14:39:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:34.246 14:39:34 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:37:34.246 14:39:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:34.246 14:39:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:34.246 14:39:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:34.246 14:39:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:34.246 14:39:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.509 14:39:35 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:34.509 14:39:35 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:37:34.509 14:39:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:34.509 14:39:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:34.509 14:39:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:34.509 14:39:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.509 14:39:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:34.766 14:39:35 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:37:34.766 14:39:35 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:34.766 14:39:35 keyring_file -- common/autotest_common.sh@652 -- # local 
es=0
00:37:34.766 14:39:35 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:37:34.766 14:39:35 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:37:34.766 14:39:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:37:34.766 14:39:35 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:37:34.766 14:39:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:37:34.766 14:39:35 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:37:34.766 14:39:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:37:34.766 [2024-12-10 14:39:35.433648] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:37:34.766 [2024-12-10 14:39:35.434360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec8770 (107): Transport endpoint is not connected
00:37:34.766 [2024-12-10 14:39:35.435355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec8770 (9): Bad file descriptor
00:37:34.766 [2024-12-10 14:39:35.436356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:37:34.766 [2024-12-10 14:39:35.436367] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:37:34.766 [2024-12-10 14:39:35.436374] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:37:34.766 [2024-12-10 14:39:35.436383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
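The errors above are the point of this step: key1 holds a PSK that does not match the one the target listener was provisioned with, so the target drops the connection (spdk_sock_recv() fails with errno 107) and the attach fails; the JSON-RPC request/response dump that follows records the resulting -5 (Input/output error). A minimal standalone sketch of the same negative check, assuming the bdevperf instance from this run is still listening on /var/tmp/bperf.sock and using plain bash in place of the suite's NOT/bperf_cmd helpers:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# attach with the mismatched PSK; the RPC is expected to fail, so invert the result
if ! "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
    echo 'attach with wrong PSK failed, as the test expects'
fi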
00:37:34.766 request:
00:37:34.766 {
00:37:34.766 "name": "nvme0",
00:37:34.766 "trtype": "tcp",
00:37:34.766 "traddr": "127.0.0.1",
00:37:34.766 "adrfam": "ipv4",
00:37:34.766 "trsvcid": "4420",
00:37:34.766 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:37:34.766 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:37:34.766 "prchk_reftag": false,
00:37:34.766 "prchk_guard": false,
00:37:34.766 "hdgst": false,
00:37:34.766 "ddgst": false,
00:37:34.766 "psk": "key1",
00:37:34.766 "allow_unrecognized_csi": false,
00:37:34.766 "method": "bdev_nvme_attach_controller",
00:37:34.766 "req_id": 1
00:37:34.766 }
00:37:34.766 Got JSON-RPC error response
00:37:34.766 response:
00:37:34.766 {
00:37:34.766 "code": -5,
00:37:34.766 "message": "Input/output error"
00:37:34.766 }
00:37:34.766 14:39:35 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:37:34.766 14:39:35 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:37:34.766 14:39:35 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:37:34.766 14:39:35 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:37:34.766 14:39:35 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0
00:37:34.766 14:39:35 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:37:34.766 14:39:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:34.766 14:39:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:37:34.766 14:39:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:34.766 14:39:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:35.023 14:39:35 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 ))
00:37:35.023 14:39:35 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1
00:37:35.023 14:39:35 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:37:35.023 14:39:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:35.023 14:39:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:35.023 14:39:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:37:35.023 14:39:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:35.279 14:39:35 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 ))
00:37:35.279 14:39:35 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0
00:37:35.279 14:39:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:37:35.536 14:39:36 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1
00:37:35.536 14:39:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1
00:37:35.536 14:39:36 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys
00:37:35.536 14:39:36 keyring_file -- keyring/file.sh@78 -- # jq length
00:37:35.536 14:39:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:35.793 14:39:36 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 ))
00:37:35.793 14:39:36 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.0ECHpJitkG
00:37:35.793 14:39:36 keyring_file -- keyring/file.sh@82 -- #
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.0ECHpJitkG 00:37:35.793 14:39:36 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:35.793 14:39:36 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.0ECHpJitkG 00:37:35.793 14:39:36 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:35.793 14:39:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:35.793 14:39:36 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:35.793 14:39:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:35.793 14:39:36 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0ECHpJitkG 00:37:35.793 14:39:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0ECHpJitkG 00:37:36.050 [2024-12-10 14:39:36.625557] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.0ECHpJitkG': 0100660 00:37:36.050 [2024-12-10 14:39:36.625582] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:36.050 request: 00:37:36.050 { 00:37:36.050 "name": "key0", 00:37:36.050 "path": "/tmp/tmp.0ECHpJitkG", 00:37:36.050 "method": "keyring_file_add_key", 00:37:36.050 "req_id": 1 00:37:36.050 } 00:37:36.050 Got JSON-RPC error response 00:37:36.050 response: 00:37:36.050 { 00:37:36.050 "code": -1, 00:37:36.050 "message": "Operation not permitted" 00:37:36.050 } 00:37:36.050 14:39:36 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:36.050 14:39:36 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:36.050 14:39:36 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:36.050 14:39:36 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:36.050 14:39:36 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.0ECHpJitkG 00:37:36.050 14:39:36 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0ECHpJitkG 00:37:36.050 14:39:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0ECHpJitkG 00:37:36.307 14:39:36 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.0ECHpJitkG 00:37:36.307 14:39:36 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:37:36.307 14:39:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:36.307 14:39:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:36.307 14:39:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:36.307 14:39:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:36.307 14:39:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:36.307 14:39:37 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:37:36.307 14:39:37 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:36.307 14:39:37 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:36.307 14:39:37 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:36.307 14:39:37 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:36.307 14:39:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:36.307 14:39:37 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:36.307 14:39:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:36.307 14:39:37 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:36.307 14:39:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:36.564 [2024-12-10 14:39:37.211121] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.0ECHpJitkG': No such file or directory 00:37:36.564 [2024-12-10 14:39:37.211144] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:36.564 [2024-12-10 14:39:37.211160] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:36.564 [2024-12-10 14:39:37.211167] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:37:36.564 [2024-12-10 14:39:37.211179] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:36.564 [2024-12-10 14:39:37.211186] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:36.564 request: 00:37:36.564 { 00:37:36.564 "name": "nvme0", 00:37:36.564 "trtype": "tcp", 00:37:36.564 "traddr": "127.0.0.1", 00:37:36.564 "adrfam": "ipv4", 00:37:36.564 "trsvcid": "4420", 00:37:36.564 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:36.564 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:36.564 "prchk_reftag": false, 00:37:36.564 "prchk_guard": false, 00:37:36.564 "hdgst": false, 00:37:36.564 "ddgst": false, 00:37:36.564 "psk": "key0", 00:37:36.564 "allow_unrecognized_csi": false, 00:37:36.564 "method": "bdev_nvme_attach_controller", 00:37:36.564 "req_id": 1 00:37:36.564 } 00:37:36.564 Got JSON-RPC error response 00:37:36.564 response: 00:37:36.564 { 00:37:36.564 "code": -19, 00:37:36.564 "message": "No such device" 00:37:36.564 } 00:37:36.564 14:39:37 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:36.564 14:39:37 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:36.564 14:39:37 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:36.564 14:39:37 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:36.564 14:39:37 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:37:36.564 14:39:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:36.821 14:39:37 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:36.821 14:39:37 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:37:36.821 14:39:37 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:36.821 14:39:37 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:36.821 14:39:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:36.821 14:39:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:36.821 14:39:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.whGUcm6vD9 00:37:36.821 14:39:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:36.821 14:39:37 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:36.821 14:39:37 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:36.821 14:39:37 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:36.821 14:39:37 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:37:36.821 14:39:37 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:36.821 14:39:37 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:36.821 14:39:37 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.whGUcm6vD9 00:37:36.821 14:39:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.whGUcm6vD9 00:37:36.821 14:39:37 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.whGUcm6vD9 00:37:36.821 14:39:37 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.whGUcm6vD9 00:37:36.821 14:39:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.whGUcm6vD9 00:37:37.077 14:39:37 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:37.077 14:39:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:37.334 nvme0n1 00:37:37.334 14:39:37 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:37:37.334 14:39:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:37.334 14:39:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:37.334 14:39:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:37.334 14:39:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:37.334 14:39:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:37.592 14:39:38 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:37:37.592 14:39:38 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:37:37.592 14:39:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:37.592 14:39:38 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:37:37.592 14:39:38 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:37:37.592 14:39:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:37.592 14:39:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:37.592 14:39:38 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:37.849 14:39:38 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:37:37.849 14:39:38 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:37:37.849 14:39:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:37.849 14:39:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:37.849 14:39:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:37.849 14:39:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:37.849 14:39:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:38.106 14:39:38 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:37:38.106 14:39:38 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:38.106 14:39:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:38.363 14:39:38 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:37:38.363 14:39:38 keyring_file -- keyring/file.sh@105 -- # jq length 00:37:38.363 14:39:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:38.363 14:39:39 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:37:38.363 14:39:39 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.whGUcm6vD9 00:37:38.363 14:39:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.whGUcm6vD9 00:37:38.620 14:39:39 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.7rfozAY6Zf 00:37:38.620 14:39:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.7rfozAY6Zf 00:37:38.877 14:39:39 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:38.877 14:39:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:39.134 nvme0n1 00:37:39.134 14:39:39 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:37:39.134 14:39:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:39.392 14:39:39 keyring_file -- keyring/file.sh@113 -- # config='{ 00:37:39.392 "subsystems": [ 00:37:39.392 { 00:37:39.392 "subsystem": "keyring", 00:37:39.392 "config": [ 00:37:39.392 { 00:37:39.392 "method": "keyring_file_add_key", 00:37:39.392 "params": { 00:37:39.392 "name": "key0", 00:37:39.392 "path": "/tmp/tmp.whGUcm6vD9" 00:37:39.392 } 00:37:39.392 }, 00:37:39.392 { 00:37:39.392 "method": "keyring_file_add_key", 00:37:39.392 "params": { 00:37:39.392 "name": "key1", 00:37:39.392 "path": "/tmp/tmp.7rfozAY6Zf" 00:37:39.392 } 00:37:39.392 } 00:37:39.392 ] 00:37:39.392 
}, 00:37:39.392 { 00:37:39.392 "subsystem": "iobuf", 00:37:39.392 "config": [ 00:37:39.392 { 00:37:39.392 "method": "iobuf_set_options", 00:37:39.392 "params": { 00:37:39.392 "small_pool_count": 8192, 00:37:39.392 "large_pool_count": 1024, 00:37:39.392 "small_bufsize": 8192, 00:37:39.392 "large_bufsize": 135168, 00:37:39.392 "enable_numa": false 00:37:39.392 } 00:37:39.392 } 00:37:39.392 ] 00:37:39.392 }, 00:37:39.392 { 00:37:39.392 "subsystem": "sock", 00:37:39.392 "config": [ 00:37:39.392 { 00:37:39.392 "method": "sock_set_default_impl", 00:37:39.392 "params": { 00:37:39.392 "impl_name": "posix" 00:37:39.392 } 00:37:39.392 }, 00:37:39.392 { 00:37:39.392 "method": "sock_impl_set_options", 00:37:39.392 "params": { 00:37:39.392 "impl_name": "ssl", 00:37:39.392 "recv_buf_size": 4096, 00:37:39.392 "send_buf_size": 4096, 00:37:39.392 "enable_recv_pipe": true, 00:37:39.392 "enable_quickack": false, 00:37:39.392 "enable_placement_id": 0, 00:37:39.392 "enable_zerocopy_send_server": true, 00:37:39.392 "enable_zerocopy_send_client": false, 00:37:39.392 "zerocopy_threshold": 0, 00:37:39.392 "tls_version": 0, 00:37:39.392 "enable_ktls": false 00:37:39.392 } 00:37:39.392 }, 00:37:39.392 { 00:37:39.392 "method": "sock_impl_set_options", 00:37:39.392 "params": { 00:37:39.392 "impl_name": "posix", 00:37:39.392 "recv_buf_size": 2097152, 00:37:39.392 "send_buf_size": 2097152, 00:37:39.392 "enable_recv_pipe": true, 00:37:39.392 "enable_quickack": false, 00:37:39.393 "enable_placement_id": 0, 00:37:39.393 "enable_zerocopy_send_server": true, 00:37:39.393 "enable_zerocopy_send_client": false, 00:37:39.393 "zerocopy_threshold": 0, 00:37:39.393 "tls_version": 0, 00:37:39.393 "enable_ktls": false 00:37:39.393 } 00:37:39.393 } 00:37:39.393 ] 00:37:39.393 }, 00:37:39.393 { 00:37:39.393 "subsystem": "vmd", 00:37:39.393 "config": [] 00:37:39.393 }, 00:37:39.393 { 00:37:39.393 "subsystem": "accel", 00:37:39.393 "config": [ 00:37:39.393 { 00:37:39.393 "method": "accel_set_options", 00:37:39.393 "params": { 00:37:39.393 "small_cache_size": 128, 00:37:39.393 "large_cache_size": 16, 00:37:39.393 "task_count": 2048, 00:37:39.393 "sequence_count": 2048, 00:37:39.393 "buf_count": 2048 00:37:39.393 } 00:37:39.393 } 00:37:39.393 ] 00:37:39.393 }, 00:37:39.393 { 00:37:39.393 "subsystem": "bdev", 00:37:39.393 "config": [ 00:37:39.393 { 00:37:39.393 "method": "bdev_set_options", 00:37:39.393 "params": { 00:37:39.393 "bdev_io_pool_size": 65535, 00:37:39.393 "bdev_io_cache_size": 256, 00:37:39.393 "bdev_auto_examine": true, 00:37:39.393 "iobuf_small_cache_size": 128, 00:37:39.393 "iobuf_large_cache_size": 16 00:37:39.393 } 00:37:39.393 }, 00:37:39.393 { 00:37:39.393 "method": "bdev_raid_set_options", 00:37:39.393 "params": { 00:37:39.393 "process_window_size_kb": 1024, 00:37:39.393 "process_max_bandwidth_mb_sec": 0 00:37:39.393 } 00:37:39.393 }, 00:37:39.393 { 00:37:39.393 "method": "bdev_iscsi_set_options", 00:37:39.393 "params": { 00:37:39.393 "timeout_sec": 30 00:37:39.393 } 00:37:39.393 }, 00:37:39.393 { 00:37:39.393 "method": "bdev_nvme_set_options", 00:37:39.393 "params": { 00:37:39.393 "action_on_timeout": "none", 00:37:39.393 "timeout_us": 0, 00:37:39.393 "timeout_admin_us": 0, 00:37:39.393 "keep_alive_timeout_ms": 10000, 00:37:39.393 "arbitration_burst": 0, 00:37:39.393 "low_priority_weight": 0, 00:37:39.393 "medium_priority_weight": 0, 00:37:39.393 "high_priority_weight": 0, 00:37:39.393 "nvme_adminq_poll_period_us": 10000, 00:37:39.393 "nvme_ioq_poll_period_us": 0, 00:37:39.393 "io_queue_requests": 512, 00:37:39.393 
"delay_cmd_submit": true, 00:37:39.393 "transport_retry_count": 4, 00:37:39.393 "bdev_retry_count": 3, 00:37:39.393 "transport_ack_timeout": 0, 00:37:39.393 "ctrlr_loss_timeout_sec": 0, 00:37:39.393 "reconnect_delay_sec": 0, 00:37:39.393 "fast_io_fail_timeout_sec": 0, 00:37:39.393 "disable_auto_failback": false, 00:37:39.393 "generate_uuids": false, 00:37:39.393 "transport_tos": 0, 00:37:39.393 "nvme_error_stat": false, 00:37:39.393 "rdma_srq_size": 0, 00:37:39.393 "io_path_stat": false, 00:37:39.393 "allow_accel_sequence": false, 00:37:39.393 "rdma_max_cq_size": 0, 00:37:39.393 "rdma_cm_event_timeout_ms": 0, 00:37:39.393 "dhchap_digests": [ 00:37:39.393 "sha256", 00:37:39.393 "sha384", 00:37:39.393 "sha512" 00:37:39.393 ], 00:37:39.393 "dhchap_dhgroups": [ 00:37:39.393 "null", 00:37:39.393 "ffdhe2048", 00:37:39.393 "ffdhe3072", 00:37:39.393 "ffdhe4096", 00:37:39.393 "ffdhe6144", 00:37:39.393 "ffdhe8192" 00:37:39.393 ] 00:37:39.393 } 00:37:39.393 }, 00:37:39.393 { 00:37:39.393 "method": "bdev_nvme_attach_controller", 00:37:39.393 "params": { 00:37:39.393 "name": "nvme0", 00:37:39.393 "trtype": "TCP", 00:37:39.393 "adrfam": "IPv4", 00:37:39.393 "traddr": "127.0.0.1", 00:37:39.393 "trsvcid": "4420", 00:37:39.393 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:39.393 "prchk_reftag": false, 00:37:39.393 "prchk_guard": false, 00:37:39.393 "ctrlr_loss_timeout_sec": 0, 00:37:39.393 "reconnect_delay_sec": 0, 00:37:39.393 "fast_io_fail_timeout_sec": 0, 00:37:39.393 "psk": "key0", 00:37:39.393 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:39.393 "hdgst": false, 00:37:39.393 "ddgst": false, 00:37:39.393 "multipath": "multipath" 00:37:39.393 } 00:37:39.393 }, 00:37:39.393 { 00:37:39.393 "method": "bdev_nvme_set_hotplug", 00:37:39.393 "params": { 00:37:39.393 "period_us": 100000, 00:37:39.393 "enable": false 00:37:39.393 } 00:37:39.393 }, 00:37:39.393 { 00:37:39.393 "method": "bdev_wait_for_examine" 00:37:39.393 } 00:37:39.393 ] 00:37:39.393 }, 00:37:39.393 { 00:37:39.393 "subsystem": "nbd", 00:37:39.393 "config": [] 00:37:39.393 } 00:37:39.393 ] 00:37:39.393 }' 00:37:39.393 14:39:39 keyring_file -- keyring/file.sh@115 -- # killprocess 1944604 00:37:39.393 14:39:39 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1944604 ']' 00:37:39.393 14:39:39 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1944604 00:37:39.393 14:39:39 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:39.393 14:39:39 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:39.393 14:39:39 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1944604 00:37:39.393 14:39:40 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:39.393 14:39:40 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:39.393 14:39:40 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1944604' 00:37:39.393 killing process with pid 1944604 00:37:39.393 14:39:40 keyring_file -- common/autotest_common.sh@973 -- # kill 1944604 00:37:39.393 Received shutdown signal, test time was about 1.000000 seconds 00:37:39.393 00:37:39.393 Latency(us) 00:37:39.393 [2024-12-10T13:39:40.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:39.393 [2024-12-10T13:39:40.133Z] =================================================================================================================== 00:37:39.393 [2024-12-10T13:39:40.133Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:39.393 14:39:40 
keyring_file -- common/autotest_common.sh@978 -- # wait 1944604 00:37:39.651 14:39:40 keyring_file -- keyring/file.sh@118 -- # bperfpid=1946101 00:37:39.651 14:39:40 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1946101 /var/tmp/bperf.sock 00:37:39.651 14:39:40 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1946101 ']' 00:37:39.651 14:39:40 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:39.651 14:39:40 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:39.651 14:39:40 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:39.651 14:39:40 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:39.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:39.651 14:39:40 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:37:39.651 "subsystems": [ 00:37:39.651 { 00:37:39.651 "subsystem": "keyring", 00:37:39.651 "config": [ 00:37:39.651 { 00:37:39.652 "method": "keyring_file_add_key", 00:37:39.652 "params": { 00:37:39.652 "name": "key0", 00:37:39.652 "path": "/tmp/tmp.whGUcm6vD9" 00:37:39.652 } 00:37:39.652 }, 00:37:39.652 { 00:37:39.652 "method": "keyring_file_add_key", 00:37:39.652 "params": { 00:37:39.652 "name": "key1", 00:37:39.652 "path": "/tmp/tmp.7rfozAY6Zf" 00:37:39.652 } 00:37:39.652 } 00:37:39.652 ] 00:37:39.652 }, 00:37:39.652 { 00:37:39.652 "subsystem": "iobuf", 00:37:39.652 "config": [ 00:37:39.652 { 00:37:39.652 "method": "iobuf_set_options", 00:37:39.652 "params": { 00:37:39.652 "small_pool_count": 8192, 00:37:39.652 "large_pool_count": 1024, 00:37:39.652 "small_bufsize": 8192, 00:37:39.652 "large_bufsize": 135168, 00:37:39.652 "enable_numa": false 00:37:39.652 } 00:37:39.652 } 00:37:39.652 ] 00:37:39.652 }, 00:37:39.652 { 00:37:39.652 "subsystem": "sock", 00:37:39.652 "config": [ 00:37:39.652 { 00:37:39.652 "method": "sock_set_default_impl", 00:37:39.652 "params": { 00:37:39.652 "impl_name": "posix" 00:37:39.652 } 00:37:39.652 }, 00:37:39.652 { 00:37:39.652 "method": "sock_impl_set_options", 00:37:39.652 "params": { 00:37:39.652 "impl_name": "ssl", 00:37:39.652 "recv_buf_size": 4096, 00:37:39.652 "send_buf_size": 4096, 00:37:39.652 "enable_recv_pipe": true, 00:37:39.652 "enable_quickack": false, 00:37:39.652 "enable_placement_id": 0, 00:37:39.652 "enable_zerocopy_send_server": true, 00:37:39.652 "enable_zerocopy_send_client": false, 00:37:39.652 "zerocopy_threshold": 0, 00:37:39.652 "tls_version": 0, 00:37:39.652 "enable_ktls": false 00:37:39.652 } 00:37:39.652 }, 00:37:39.652 { 00:37:39.652 "method": "sock_impl_set_options", 00:37:39.652 "params": { 00:37:39.652 "impl_name": "posix", 00:37:39.652 "recv_buf_size": 2097152, 00:37:39.652 "send_buf_size": 2097152, 00:37:39.652 "enable_recv_pipe": true, 00:37:39.652 "enable_quickack": false, 00:37:39.652 "enable_placement_id": 0, 00:37:39.652 "enable_zerocopy_send_server": true, 00:37:39.652 "enable_zerocopy_send_client": false, 00:37:39.652 "zerocopy_threshold": 0, 00:37:39.652 "tls_version": 0, 00:37:39.652 "enable_ktls": false 00:37:39.652 } 00:37:39.652 } 00:37:39.652 ] 00:37:39.652 }, 00:37:39.652 { 00:37:39.652 "subsystem": "vmd", 00:37:39.652 "config": [] 00:37:39.652 }, 00:37:39.652 { 00:37:39.652 "subsystem": "accel", 00:37:39.652 "config": [ 00:37:39.652 
{ 00:37:39.652 "method": "accel_set_options", 00:37:39.652 "params": { 00:37:39.652 "small_cache_size": 128, 00:37:39.652 "large_cache_size": 16, 00:37:39.652 "task_count": 2048, 00:37:39.652 "sequence_count": 2048, 00:37:39.652 "buf_count": 2048 00:37:39.652 } 00:37:39.652 } 00:37:39.652 ] 00:37:39.652 }, 00:37:39.652 { 00:37:39.652 "subsystem": "bdev", 00:37:39.652 "config": [ 00:37:39.652 { 00:37:39.652 "method": "bdev_set_options", 00:37:39.652 "params": { 00:37:39.652 "bdev_io_pool_size": 65535, 00:37:39.652 "bdev_io_cache_size": 256, 00:37:39.652 "bdev_auto_examine": true, 00:37:39.652 "iobuf_small_cache_size": 128, 00:37:39.652 "iobuf_large_cache_size": 16 00:37:39.652 } 00:37:39.652 }, 00:37:39.652 { 00:37:39.652 "method": "bdev_raid_set_options", 00:37:39.652 "params": { 00:37:39.652 "process_window_size_kb": 1024, 00:37:39.652 "process_max_bandwidth_mb_sec": 0 00:37:39.652 } 00:37:39.652 }, 00:37:39.652 { 00:37:39.652 "method": "bdev_iscsi_set_options", 00:37:39.652 "params": { 00:37:39.652 "timeout_sec": 30 00:37:39.652 } 00:37:39.652 }, 00:37:39.652 { 00:37:39.652 "method": "bdev_nvme_set_options", 00:37:39.652 "params": { 00:37:39.652 "action_on_timeout": "none", 00:37:39.652 "timeout_us": 0, 00:37:39.652 "timeout_admin_us": 0, 00:37:39.652 "keep_alive_timeout_ms": 10000, 00:37:39.652 "arbitration_burst": 0, 00:37:39.652 "low_priority_weight": 0, 00:37:39.652 "medium_priority_weight": 0, 00:37:39.652 "high_priority_weight": 0, 00:37:39.652 "nvme_adminq_poll_period_us": 10000, 00:37:39.652 "nvme_ioq_poll_period_us": 0, 00:37:39.652 "io_queue_requests": 512, 00:37:39.652 "delay_cmd_submit": true, 00:37:39.652 "transport_retry_count": 4, 00:37:39.652 "bdev_retry_count": 3, 00:37:39.652 "transport_ack_timeout": 0, 00:37:39.652 "ctrlr_loss_timeout_sec": 0, 00:37:39.652 "reconnect_delay_sec": 0, 00:37:39.652 "fast_io_fail_timeout_sec": 0, 00:37:39.652 "disable_auto_failback": false, 00:37:39.652 "generate_uuids": false, 00:37:39.652 "transport_tos": 0, 00:37:39.652 "nvme_error_stat": false, 00:37:39.652 "rdma_srq_size": 0, 00:37:39.652 "io_path_stat": false, 00:37:39.652 "allow_accel_sequence": false, 00:37:39.652 "rdma_max_cq_size": 0, 00:37:39.652 "rdma_cm_event_timeout_ms": 0, 00:37:39.652 "dhchap_digests": [ 00:37:39.652 "sha256", 00:37:39.652 "sha384", 00:37:39.652 "sha512" 00:37:39.652 ], 00:37:39.652 "dhchap_dhgroups": [ 00:37:39.652 "null", 00:37:39.652 "ffdhe2048", 00:37:39.652 "ffdhe3072", 00:37:39.652 "ffdhe4096", 00:37:39.652 "ffdhe6144", 00:37:39.652 "ffdhe8192" 00:37:39.652 ] 00:37:39.652 } 00:37:39.652 }, 00:37:39.652 { 00:37:39.652 "method": "bdev_nvme_attach_controller", 00:37:39.652 "params": { 00:37:39.652 "name": "nvme0", 00:37:39.652 "trtype": "TCP", 00:37:39.652 "adrfam": "IPv4", 00:37:39.652 "traddr": "127.0.0.1", 00:37:39.652 "trsvcid": "4420", 00:37:39.652 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:39.652 "prchk_reftag": false, 00:37:39.652 "prchk_guard": false, 00:37:39.652 "ctrlr_loss_timeout_sec": 0, 00:37:39.652 "reconnect_delay_sec": 0, 00:37:39.652 "fast_io_fail_timeout_sec": 0, 00:37:39.652 "psk": "key0", 00:37:39.652 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:39.652 "hdgst": false, 00:37:39.652 "ddgst": false, 00:37:39.652 "multipath": "multipath" 00:37:39.652 } 00:37:39.652 }, 00:37:39.652 { 00:37:39.652 "method": "bdev_nvme_set_hotplug", 00:37:39.652 "params": { 00:37:39.652 "period_us": 100000, 00:37:39.652 "enable": false 00:37:39.652 } 00:37:39.652 }, 00:37:39.652 { 00:37:39.652 "method": "bdev_wait_for_examine" 00:37:39.652 } 00:37:39.652 
] 00:37:39.652 }, 00:37:39.652 { 00:37:39.652 "subsystem": "nbd", 00:37:39.652 "config": [] 00:37:39.652 } 00:37:39.652 ] 00:37:39.652 }' 00:37:39.652 14:39:40 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:39.652 14:39:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:39.652 [2024-12-10 14:39:40.220068] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 00:37:39.652 [2024-12-10 14:39:40.220118] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1946101 ] 00:37:39.652 [2024-12-10 14:39:40.298065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:39.652 [2024-12-10 14:39:40.338813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:39.910 [2024-12-10 14:39:40.499536] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:40.484 14:39:41 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:40.484 14:39:41 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:40.484 14:39:41 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:37:40.484 14:39:41 keyring_file -- keyring/file.sh@121 -- # jq length 00:37:40.484 14:39:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:40.744 14:39:41 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:40.744 14:39:41 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:37:40.744 14:39:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:40.744 14:39:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:40.744 14:39:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:40.744 14:39:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:40.744 14:39:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:40.744 14:39:41 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:37:40.745 14:39:41 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:37:40.745 14:39:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:40.745 14:39:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:40.745 14:39:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:40.745 14:39:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:40.745 14:39:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:41.002 14:39:41 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:37:41.002 14:39:41 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:37:41.002 14:39:41 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:37:41.002 14:39:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:41.260 14:39:41 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:37:41.260 14:39:41 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:41.260 14:39:41 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.whGUcm6vD9 /tmp/tmp.7rfozAY6Zf 00:37:41.260 14:39:41 keyring_file -- keyring/file.sh@20 -- # killprocess 1946101 00:37:41.260 14:39:41 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1946101 ']' 00:37:41.260 14:39:41 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1946101 00:37:41.260 14:39:41 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:41.260 14:39:41 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:41.260 14:39:41 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1946101 00:37:41.260 14:39:41 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:41.260 14:39:41 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:41.260 14:39:41 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1946101' 00:37:41.260 killing process with pid 1946101 00:37:41.260 14:39:41 keyring_file -- common/autotest_common.sh@973 -- # kill 1946101 00:37:41.260 Received shutdown signal, test time was about 1.000000 seconds 00:37:41.260 00:37:41.260 Latency(us) 00:37:41.260 [2024-12-10T13:39:42.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:41.260 [2024-12-10T13:39:42.000Z] =================================================================================================================== 00:37:41.260 [2024-12-10T13:39:42.000Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:41.260 14:39:41 keyring_file -- common/autotest_common.sh@978 -- # wait 1946101 00:37:41.517 14:39:42 keyring_file -- keyring/file.sh@21 -- # killprocess 1944382 00:37:41.517 14:39:42 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1944382 ']' 00:37:41.518 14:39:42 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1944382 00:37:41.518 14:39:42 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:41.518 14:39:42 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:41.518 14:39:42 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1944382 00:37:41.518 14:39:42 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:41.518 14:39:42 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:41.518 14:39:42 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1944382' 00:37:41.518 killing process with pid 1944382 00:37:41.518 14:39:42 keyring_file -- common/autotest_common.sh@973 -- # kill 1944382 00:37:41.518 14:39:42 keyring_file -- common/autotest_common.sh@978 -- # wait 1944382 00:37:41.776 00:37:41.776 real 0m11.729s 00:37:41.776 user 0m29.096s 00:37:41.776 sys 0m2.732s 00:37:41.776 14:39:42 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:41.776 14:39:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:41.776 ************************************ 00:37:41.776 END TEST keyring_file 00:37:41.776 ************************************ 00:37:41.776 14:39:42 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:37:41.776 14:39:42 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:41.776 14:39:42 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:41.776 14:39:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:41.776 14:39:42 -- 
common/autotest_common.sh@10 -- # set +x 00:37:41.776 ************************************ 00:37:41.776 START TEST keyring_linux 00:37:41.776 ************************************ 00:37:41.776 14:39:42 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:41.776 Joined session keyring: 1000689342 00:37:42.035 * Looking for test storage... 00:37:42.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:42.035 14:39:42 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:42.035 14:39:42 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:37:42.035 14:39:42 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:42.035 14:39:42 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@345 -- # : 1 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:42.035 14:39:42 keyring_linux -- scripts/common.sh@368 -- # return 0 00:37:42.036 14:39:42 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:42.036 14:39:42 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:42.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.036 --rc genhtml_branch_coverage=1 00:37:42.036 --rc genhtml_function_coverage=1 00:37:42.036 --rc genhtml_legend=1 00:37:42.036 --rc geninfo_all_blocks=1 00:37:42.036 --rc geninfo_unexecuted_blocks=1 00:37:42.036 00:37:42.036 ' 00:37:42.036 14:39:42 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:42.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.036 --rc genhtml_branch_coverage=1 00:37:42.036 --rc genhtml_function_coverage=1 00:37:42.036 --rc genhtml_legend=1 00:37:42.036 --rc geninfo_all_blocks=1 00:37:42.036 --rc geninfo_unexecuted_blocks=1 00:37:42.036 00:37:42.036 ' 00:37:42.036 14:39:42 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:42.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.036 --rc genhtml_branch_coverage=1 00:37:42.036 --rc genhtml_function_coverage=1 00:37:42.036 --rc genhtml_legend=1 00:37:42.036 --rc geninfo_all_blocks=1 00:37:42.036 --rc geninfo_unexecuted_blocks=1 00:37:42.036 00:37:42.036 ' 00:37:42.036 14:39:42 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:42.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.036 --rc genhtml_branch_coverage=1 00:37:42.036 --rc genhtml_function_coverage=1 00:37:42.036 --rc genhtml_legend=1 00:37:42.036 --rc geninfo_all_blocks=1 00:37:42.036 --rc geninfo_unexecuted_blocks=1 00:37:42.036 00:37:42.036 ' 00:37:42.036 14:39:42 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:42.036 14:39:42 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:42.036 14:39:42 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:37:42.036 14:39:42 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:42.036 14:39:42 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:42.036 14:39:42 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:42.036 14:39:42 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.036 14:39:42 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.036 14:39:42 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.036 14:39:42 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:42.036 14:39:42 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
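The "Joined session keyring: 1000689342" line printed at the start of this test comes from the keyctl-session-wrapper script named above: the suite is run inside a fresh kernel session keyring so that keys created during the run are scoped to the test session rather than to the CI user's keyrings. A rough equivalent with the stock keyutils CLI, assuming keyctl(1) is installed on the host (the wrapper's internals are not shown in this log):

# start an anonymous session keyring and run the test inside it; keys the
# test links into @s (the session keyring) go away when linux.sh exits
keyctl session - /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh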
00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:42.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:42.036 14:39:42 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:42.036 14:39:42 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:42.036 14:39:42 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:42.036 14:39:42 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:42.036 14:39:42 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:42.036 14:39:42 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:42.036 14:39:42 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:42.036 14:39:42 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:42.036 14:39:42 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:42.036 14:39:42 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:42.036 14:39:42 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:42.036 14:39:42 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:42.036 14:39:42 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@733 -- # python - 00:37:42.036 14:39:42 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:42.036 14:39:42 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:42.036 /tmp/:spdk-test:key0 00:37:42.036 14:39:42 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:42.036 14:39:42 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:42.036 14:39:42 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:42.036 14:39:42 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:42.036 14:39:42 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:42.036 14:39:42 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:42.036 
14:39:42 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:37:42.036 14:39:42 keyring_linux -- nvmf/common.sh@733 -- # python - 00:37:42.294 14:39:42 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:42.294 14:39:42 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:42.294 /tmp/:spdk-test:key1 00:37:42.294 14:39:42 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1946503 00:37:42.294 14:39:42 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1946503 00:37:42.294 14:39:42 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:42.294 14:39:42 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1946503 ']' 00:37:42.294 14:39:42 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:42.294 14:39:42 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:42.294 14:39:42 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:42.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:42.294 14:39:42 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:42.294 14:39:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:42.294 [2024-12-10 14:39:42.837332] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
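The two prep_key calls above write /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 after piping each hex key through format_interchange_psk, whose body the trace elides to "python -". A minimal sketch of that formatting, assuming the helper base64-encodes the ASCII key text with a little-endian CRC32 appended (digest 0 selecting the plain, no-hash variant):

    key=00112233445566778899aabbccddeeff
    b64=$(python3 -c '
    import base64, struct, sys, zlib
    k = sys.argv[1].encode()
    print(base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode())
    ' "$key")
    echo "NVMeTLSkey-1:00:${b64}:"
    # if the CRC32 assumption holds, this reproduces the
    # NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
    # payload that keyctl loads for :spdk-test:key0 just below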
00:37:42.294 [2024-12-10 14:39:42.837386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1946503 ] 00:37:42.294 [2024-12-10 14:39:42.919373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:42.294 [2024-12-10 14:39:42.961896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:43.227 14:39:43 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:43.227 14:39:43 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:37:43.227 14:39:43 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:43.227 14:39:43 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.227 14:39:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:43.227 [2024-12-10 14:39:43.661463] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:43.227 null0 00:37:43.227 [2024-12-10 14:39:43.693508] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:43.227 [2024-12-10 14:39:43.693791] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:43.227 14:39:43 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.227 14:39:43 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:43.227 455568795 00:37:43.227 14:39:43 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:43.227 810641546 00:37:43.227 14:39:43 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1946667 00:37:43.227 14:39:43 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1946667 /var/tmp/bperf.sock 00:37:43.227 14:39:43 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:43.227 14:39:43 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1946667 ']' 00:37:43.227 14:39:43 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:43.227 14:39:43 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:43.227 14:39:43 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:43.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:43.227 14:39:43 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:43.227 14:39:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:43.227 [2024-12-10 14:39:43.765524] Starting SPDK v25.01-pre git sha1 02d0d9b38 / DPDK 24.03.0 initialization... 
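The serial numbers 455568795 and 810641546 echoed above are what keyctl returns when the two PSK payloads are added to the session keyring. Reduced to its keyutils calls, the round-trip the rest of the test leans on looks like this (payload shortened to a placeholder; serials differ every run):

    sn=$(keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:<base64>:' @s)  # add, prints the serial
    keyctl search @s user :spdk-test:key0   # name -> serial, i.e. get_keysn in the trace
    keyctl print "$sn"                      # dumps the payload for the linux.sh@27 comparison
    keyctl unlink "$sn"                     # cleanup's form; reports "1 links removed"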
00:37:43.227 [2024-12-10 14:39:43.765566] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1946667 ] 00:37:43.227 [2024-12-10 14:39:43.842737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:43.227 [2024-12-10 14:39:43.881552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:43.227 14:39:43 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:43.227 14:39:43 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:37:43.227 14:39:43 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:43.227 14:39:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:43.484 14:39:44 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:43.484 14:39:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:43.742 14:39:44 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:43.742 14:39:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:44.000 [2024-12-10 14:39:44.545825] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:44.000 nvme0n1 00:37:44.000 14:39:44 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:44.000 14:39:44 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:44.000 14:39:44 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:44.000 14:39:44 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:44.000 14:39:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:44.000 14:39:44 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:44.257 14:39:44 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:44.257 14:39:44 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:44.257 14:39:44 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:44.257 14:39:44 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:44.257 14:39:44 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:44.257 14:39:44 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:44.257 14:39:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:44.515 14:39:45 keyring_linux -- keyring/linux.sh@25 -- # sn=455568795 00:37:44.515 14:39:45 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:44.515 14:39:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:44.515 14:39:45 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 455568795 == \4\5\5\5\6\8\7\9\5 ]] 00:37:44.515 14:39:45 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 455568795 00:37:44.515 14:39:45 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:44.515 14:39:45 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:44.515 Running I/O for 1 seconds... 00:37:45.446 21701.00 IOPS, 84.77 MiB/s 00:37:45.446 Latency(us) 00:37:45.446 [2024-12-10T13:39:46.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:45.447 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:45.447 nvme0n1 : 1.01 21701.57 84.77 0.00 0.00 5878.84 4962.01 10797.84 00:37:45.447 [2024-12-10T13:39:46.187Z] =================================================================================================================== 00:37:45.447 [2024-12-10T13:39:46.187Z] Total : 21701.57 84.77 0.00 0.00 5878.84 4962.01 10797.84 00:37:45.447 { 00:37:45.447 "results": [ 00:37:45.447 { 00:37:45.447 "job": "nvme0n1", 00:37:45.447 "core_mask": "0x2", 00:37:45.447 "workload": "randread", 00:37:45.447 "status": "finished", 00:37:45.447 "queue_depth": 128, 00:37:45.447 "io_size": 4096, 00:37:45.447 "runtime": 1.005872, 00:37:45.447 "iops": 21701.568390411503, 00:37:45.447 "mibps": 84.77175152504493, 00:37:45.447 "io_failed": 0, 00:37:45.447 "io_timeout": 0, 00:37:45.447 "avg_latency_us": 5878.840495365493, 00:37:45.447 "min_latency_us": 4962.011428571429, 00:37:45.447 "max_latency_us": 10797.83619047619 00:37:45.447 } 00:37:45.447 ], 00:37:45.447 "core_count": 1 00:37:45.447 } 00:37:45.447 14:39:46 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:45.447 14:39:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:45.704 14:39:46 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:45.704 14:39:46 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:45.704 14:39:46 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:45.704 14:39:46 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:45.704 14:39:46 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:45.704 14:39:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:45.961 14:39:46 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:45.961 14:39:46 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:45.961 14:39:46 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:45.961 14:39:46 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:45.961 14:39:46 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:37:45.961 14:39:46 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:37:45.961 14:39:46 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:45.961 14:39:46 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:45.961 14:39:46 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:45.961 14:39:46 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:45.961 14:39:46 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:45.961 14:39:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:46.219 [2024-12-10 14:39:46.742424] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:46.219 [2024-12-10 14:39:46.742841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2558500 (107): Transport endpoint is not connected 00:37:46.219 [2024-12-10 14:39:46.743836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2558500 (9): Bad file descriptor 00:37:46.219 [2024-12-10 14:39:46.744837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:37:46.219 [2024-12-10 14:39:46.744848] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:46.219 [2024-12-10 14:39:46.744855] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:46.219 [2024-12-10 14:39:46.744864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
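Condensed, the RPC sequence bdevperf is driven through in this test is the one below (same commands as in the trace, wrapped in a small helper): key0 is known to both sides, so the first attach succeeds and carries the 1-second randread, while key1 exists only in the kernel keyring, presumably not on the target, so the NOT-wrapped second attach is expected to fail exactly as the errors above and the JSON-RPC response below show.

    rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    rpc keyring_linux_set_options --enable   # let bdevperf resolve kernel keyring keys
    rpc framework_start_init
    rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key0                # TLS PSK resolved from the session keyring
    rpc bdev_nvme_detach_controller nvme0
    rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key1                # target has no key1: expected to fail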
00:37:46.219 request: 00:37:46.219 { 00:37:46.219 "name": "nvme0", 00:37:46.219 "trtype": "tcp", 00:37:46.219 "traddr": "127.0.0.1", 00:37:46.219 "adrfam": "ipv4", 00:37:46.219 "trsvcid": "4420", 00:37:46.219 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:46.219 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:46.219 "prchk_reftag": false, 00:37:46.219 "prchk_guard": false, 00:37:46.219 "hdgst": false, 00:37:46.219 "ddgst": false, 00:37:46.219 "psk": ":spdk-test:key1", 00:37:46.219 "allow_unrecognized_csi": false, 00:37:46.219 "method": "bdev_nvme_attach_controller", 00:37:46.219 "req_id": 1 00:37:46.219 } 00:37:46.219 Got JSON-RPC error response 00:37:46.219 response: 00:37:46.219 { 00:37:46.219 "code": -5, 00:37:46.219 "message": "Input/output error" 00:37:46.219 } 00:37:46.219 14:39:46 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:37:46.219 14:39:46 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:46.219 14:39:46 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:46.219 14:39:46 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:46.219 14:39:46 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:46.219 14:39:46 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:46.219 14:39:46 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:46.219 14:39:46 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:46.219 14:39:46 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:46.219 14:39:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:46.219 14:39:46 keyring_linux -- keyring/linux.sh@33 -- # sn=455568795 00:37:46.219 14:39:46 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 455568795 00:37:46.219 1 links removed 00:37:46.219 14:39:46 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:46.219 14:39:46 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:46.219 14:39:46 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:46.219 14:39:46 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:46.219 14:39:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:46.219 14:39:46 keyring_linux -- keyring/linux.sh@33 -- # sn=810641546 00:37:46.219 14:39:46 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 810641546 00:37:46.219 1 links removed 00:37:46.219 14:39:46 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1946667 00:37:46.219 14:39:46 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1946667 ']' 00:37:46.219 14:39:46 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1946667 00:37:46.219 14:39:46 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:37:46.219 14:39:46 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:46.219 14:39:46 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1946667 00:37:46.219 14:39:46 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:46.219 14:39:46 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:46.219 14:39:46 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1946667' 00:37:46.219 killing process with pid 1946667 00:37:46.219 14:39:46 keyring_linux -- common/autotest_common.sh@973 -- # kill 1946667 00:37:46.219 Received shutdown signal, test time was about 1.000000 seconds 00:37:46.219 00:37:46.219 
Latency(us) 00:37:46.219 [2024-12-10T13:39:46.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:46.219 [2024-12-10T13:39:46.959Z] =================================================================================================================== 00:37:46.219 [2024-12-10T13:39:46.959Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:46.219 14:39:46 keyring_linux -- common/autotest_common.sh@978 -- # wait 1946667 00:37:46.477 14:39:46 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1946503 00:37:46.477 14:39:46 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1946503 ']' 00:37:46.477 14:39:46 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1946503 00:37:46.477 14:39:46 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:37:46.477 14:39:46 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:46.477 14:39:46 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1946503 00:37:46.477 14:39:47 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:46.477 14:39:47 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:46.477 14:39:47 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1946503' 00:37:46.477 killing process with pid 1946503 00:37:46.477 14:39:47 keyring_linux -- common/autotest_common.sh@973 -- # kill 1946503 00:37:46.477 14:39:47 keyring_linux -- common/autotest_common.sh@978 -- # wait 1946503 00:37:46.735 00:37:46.735 real 0m4.843s 00:37:46.735 user 0m8.869s 00:37:46.735 sys 0m1.457s 00:37:46.735 14:39:47 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:46.735 14:39:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:46.735 ************************************ 00:37:46.735 END TEST keyring_linux 00:37:46.735 ************************************ 00:37:46.735 14:39:47 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:37:46.735 14:39:47 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:37:46.735 14:39:47 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:37:46.735 14:39:47 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:37:46.735 14:39:47 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:37:46.735 14:39:47 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:37:46.735 14:39:47 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:37:46.735 14:39:47 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:37:46.735 14:39:47 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:37:46.735 14:39:47 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:37:46.735 14:39:47 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:37:46.735 14:39:47 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:37:46.735 14:39:47 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:37:46.735 14:39:47 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:37:46.735 14:39:47 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:37:46.735 14:39:47 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:37:46.735 14:39:47 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:37:46.735 14:39:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:46.735 14:39:47 -- common/autotest_common.sh@10 -- # set +x 00:37:46.735 14:39:47 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:37:46.735 14:39:47 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:37:46.735 14:39:47 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:37:46.735 14:39:47 -- common/autotest_common.sh@10 -- # set +x 00:37:52.004 INFO: APP EXITING 
00:37:52.004 INFO: killing all VMs 00:37:52.004 INFO: killing vhost app 00:37:52.004 INFO: EXIT DONE 00:37:55.298 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:37:55.298 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:37:55.298 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:37:55.298 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:37:55.298 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:37:55.298 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:37:55.298 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:37:55.298 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:37:55.298 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:37:55.557 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:37:55.557 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:37:55.557 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:37:55.557 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:37:55.557 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:37:55.557 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:37:55.557 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:37:55.557 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:37:55.557 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:37:58.840 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:37:58.840 Cleaning 00:37:58.840 Removing: /var/run/dpdk/spdk0/config 00:37:58.840 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:58.840 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:58.840 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:58.840 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:58.840 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:58.840 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:58.840 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:58.840 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:59.098 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:59.098 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:59.098 Removing: /var/run/dpdk/spdk1/config 00:37:59.098 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:59.098 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:59.098 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:59.098 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:59.098 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:59.098 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:59.098 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:59.098 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:59.098 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:59.098 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:59.098 Removing: /var/run/dpdk/spdk2/config 00:37:59.098 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:59.098 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:59.098 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:59.098 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:59.098 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:59.098 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:59.098 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:59.098 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:59.098 Removing: 
/var/run/dpdk/spdk2/fbarray_memzone 00:37:59.098 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:59.098 Removing: /var/run/dpdk/spdk3/config 00:37:59.098 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:59.098 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:59.098 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:59.098 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:59.098 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:59.098 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:59.098 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:59.098 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:59.098 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:59.098 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:59.098 Removing: /var/run/dpdk/spdk4/config 00:37:59.098 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:59.098 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:59.098 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:59.098 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:59.098 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:59.098 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:59.098 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:59.098 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:59.098 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:59.098 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:59.099 Removing: /dev/shm/bdev_svc_trace.1 00:37:59.099 Removing: /dev/shm/nvmf_trace.0 00:37:59.099 Removing: /dev/shm/spdk_tgt_trace.pid1436685 00:37:59.099 Removing: /var/run/dpdk/spdk0 00:37:59.099 Removing: /var/run/dpdk/spdk1 00:37:59.099 Removing: /var/run/dpdk/spdk2 00:37:59.099 Removing: /var/run/dpdk/spdk3 00:37:59.099 Removing: /var/run/dpdk/spdk4 00:37:59.099 Removing: /var/run/dpdk/spdk_pid1434355 00:37:59.099 Removing: /var/run/dpdk/spdk_pid1435412 00:37:59.099 Removing: /var/run/dpdk/spdk_pid1436685 00:37:59.099 Removing: /var/run/dpdk/spdk_pid1437110 00:37:59.099 Removing: /var/run/dpdk/spdk_pid1438050 00:37:59.099 Removing: /var/run/dpdk/spdk_pid1438281 00:37:59.099 Removing: /var/run/dpdk/spdk_pid1439240 00:37:59.099 Removing: /var/run/dpdk/spdk_pid1439246 00:37:59.099 Removing: /var/run/dpdk/spdk_pid1439595 00:37:59.099 Removing: /var/run/dpdk/spdk_pid1441289 00:37:59.099 Removing: /var/run/dpdk/spdk_pid1442354 00:37:59.099 Removing: /var/run/dpdk/spdk_pid1442705 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1442934 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1443235 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1443599 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1443810 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1444046 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1444332 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1445209 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1448237 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1448398 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1448533 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1448762 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1449234 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1449258 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1449744 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1449967 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1450226 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1450318 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1450502 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1450680 00:37:59.357 Removing: 
/var/run/dpdk/spdk_pid1451076 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1451317 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1451610 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1456010 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1460758 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1471828 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1472378 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1477129 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1477589 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1482109 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1488445 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1491228 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1502364 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1512005 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1513815 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1514725 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1533482 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1537936 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1586950 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1592589 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1598905 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1605737 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1605743 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1606639 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1607539 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1608442 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1608908 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1608951 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1609261 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1609364 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1609370 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1610275 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1611175 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1612076 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1612536 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1612542 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1612854 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1613989 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1614986 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1624190 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1653712 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1658468 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1660228 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1661915 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1662091 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1662322 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1662343 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1662834 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1664650 00:37:59.357 Removing: /var/run/dpdk/spdk_pid1665596 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1665971 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1668183 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1668671 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1669381 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1673906 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1679762 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1679764 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1679765 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1684218 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1693619 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1698453 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1704887 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1706191 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1707723 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1709251 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1714217 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1719020 00:37:59.616 Removing: 
/var/run/dpdk/spdk_pid1723337 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1731634 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1731731 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1736804 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1737030 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1737191 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1737501 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1737574 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1742450 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1743018 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1747821 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1750836 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1756673 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1762286 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1771442 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1779230 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1779232 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1800139 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1800762 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1801233 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1801783 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1802652 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1803120 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1803775 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1804261 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1808972 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1809248 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1815539 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1815804 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1821516 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1826222 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1836401 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1836866 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1841591 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1841878 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1847071 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1853152 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1855699 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1866556 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1876171 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1877751 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1878656 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1896695 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1900988 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1903648 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1911910 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1911980 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1917658 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1919447 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1921375 00:37:59.616 Removing: /var/run/dpdk/spdk_pid1922569 00:37:59.875 Removing: /var/run/dpdk/spdk_pid1924516 00:37:59.875 Removing: /var/run/dpdk/spdk_pid1925650 00:37:59.875 Removing: /var/run/dpdk/spdk_pid1935207 00:37:59.875 Removing: /var/run/dpdk/spdk_pid1935715 00:37:59.875 Removing: /var/run/dpdk/spdk_pid1936361 00:37:59.875 Removing: /var/run/dpdk/spdk_pid1939250 00:37:59.875 Removing: /var/run/dpdk/spdk_pid1939807 00:37:59.875 Removing: /var/run/dpdk/spdk_pid1940342 00:37:59.875 Removing: /var/run/dpdk/spdk_pid1944382 00:37:59.875 Removing: /var/run/dpdk/spdk_pid1944604 00:37:59.875 Removing: /var/run/dpdk/spdk_pid1946101 00:37:59.875 Removing: /var/run/dpdk/spdk_pid1946503 00:37:59.875 Removing: /var/run/dpdk/spdk_pid1946667 00:37:59.875 Clean 00:37:59.875 14:40:00 -- common/autotest_common.sh@1453 -- # return 0 00:37:59.875 14:40:00 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:37:59.875 
14:40:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:59.875 14:40:00 -- common/autotest_common.sh@10 -- # set +x 00:37:59.875 14:40:00 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:37:59.875 14:40:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:59.875 14:40:00 -- common/autotest_common.sh@10 -- # set +x 00:37:59.875 14:40:00 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:59.875 14:40:00 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:59.875 14:40:00 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:59.875 14:40:00 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:37:59.875 14:40:00 -- spdk/autotest.sh@398 -- # hostname 00:37:59.875 14:40:00 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-03 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:38:00.134 geninfo: WARNING: invalid characters removed from testname! 00:38:22.053 14:40:21 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:23.431 14:40:23 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:25.478 14:40:25 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:27.379 14:40:27 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:29.279 14:40:29 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:31.181 14:40:31 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:32.556 14:40:33 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:32.556 14:40:33 -- spdk/autorun.sh@1 -- $ timing_finish 00:38:32.556 14:40:33 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:38:32.556 14:40:33 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:32.556 14:40:33 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:38:32.556 14:40:33 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:32.815 + [[ -n 1354894 ]] 00:38:32.815 + sudo kill 1354894 00:38:32.825 [Pipeline] } 00:38:32.839 [Pipeline] // stage 00:38:32.844 [Pipeline] } 00:38:32.859 [Pipeline] // timeout 00:38:32.864 [Pipeline] } 00:38:32.877 [Pipeline] // catchError 00:38:32.882 [Pipeline] } 00:38:32.896 [Pipeline] // wrap 00:38:32.902 [Pipeline] } 00:38:32.915 [Pipeline] // catchError 00:38:32.924 [Pipeline] stage 00:38:32.926 [Pipeline] { (Epilogue) 00:38:32.938 [Pipeline] catchError 00:38:32.940 [Pipeline] { 00:38:32.952 [Pipeline] echo 00:38:32.954 Cleanup processes 00:38:32.959 [Pipeline] sh 00:38:33.245 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:33.245 1958123 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:33.258 [Pipeline] sh 00:38:33.541 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:33.542 ++ grep -v 'sudo pgrep' 00:38:33.542 ++ awk '{print $1}' 00:38:33.542 + sudo kill -9 00:38:33.542 + true 00:38:33.553 [Pipeline] sh 00:38:33.839 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:46.047 [Pipeline] sh 00:38:46.332 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:46.332 Artifacts sizes are good 00:38:46.345 [Pipeline] archiveArtifacts 00:38:46.353 Archiving artifacts 00:38:46.507 [Pipeline] sh 00:38:46.811 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:46.824 [Pipeline] cleanWs 00:38:46.834 [WS-CLEANUP] Deleting project workspace... 00:38:46.834 [WS-CLEANUP] Deferred wipeout is used... 00:38:46.840 [WS-CLEANUP] done 00:38:46.841 [Pipeline] } 00:38:46.857 [Pipeline] // catchError 00:38:46.868 [Pipeline] sh 00:38:47.149 + logger -p user.info -t JENKINS-CI 00:38:47.157 [Pipeline] } 00:38:47.169 [Pipeline] // stage 00:38:47.174 [Pipeline] } 00:38:47.187 [Pipeline] // node 00:38:47.191 [Pipeline] End of Pipeline 00:38:47.227 Finished: SUCCESS
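For reference, the coverage post-processing that closes the run reduces to one lcov capture, one merge with the pre-test baseline, and a chain of --remove filters. Same switches as in the trace above, with the long --rc list shortened to the two lcov flags and the three per-app filters combined into one call:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    out=$spdk/../output
    rc=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1)
    lcov "${rc[@]}" -q -c --no-external -d "$spdk" -t "$(hostname)" -o "$out/cov_test.info"   # capture
    lcov "${rc[@]}" -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"  # merge
    lcov "${rc[@]}" -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
    lcov "${rc[@]}" -q -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"
    lcov "${rc[@]}" -q -r "$out/cov_total.info" '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*' -o "$out/cov_total.info"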